Mylex DAC960/DAC1100 PCI RAID Controller Driver for Linux

Version 2.2.4 for Linux 2.2.11
Version 2.0.4 for Linux 2.0.37

Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>

Mylex, Inc. designs and manufactures a variety of high performance PCI RAID
controllers.  Mylex Corporation is located at 34551 Ardenwood Blvd., Fremont,
California 94555, USA and can be reached at 510/796-6100 or on the World Wide
Web at http://www.mylex.com.  Mylex RAID Technical Support can be reached by
electronic mail at support@mylex.com (for eXtremeRAID 1100 and older DAC960
models) or techsup@mylex.com (for AcceleRAID models), by voice at 510/608-2400,
or by FAX at 510/745-7715.  Contact information for offices in Europe and
Japan is available on the Web site.

The latest information on Linux support for DAC960 PCI RAID Controllers, as
well as the most recent release of this driver, will always be available from
my Linux Home Page at URL "http://www.dandelion.com/Linux/".  The Linux DAC960
driver supports all current DAC960 PCI family controllers including the
AcceleRAID models, as well as the eXtremeRAID 1100; see below for a complete
list.  For simplicity, in most places this documentation refers to DAC960
generically rather than explicitly listing all the models.

Bug reports should be sent via electronic mail to "lnz@dandelion.com".  Please
include with the bug report the complete configuration messages reported by
the driver at startup, along with any subsequent system messages relevant to
the controller's operation, and a detailed description of your system's
hardware configuration.

Please consult the DAC960 RAID controller documentation for detailed
information regarding installation and configuration of the controllers.  This
document primarily provides information specific to the Linux DAC960 support.

The DAC960 RAID controllers are supported solely as high performance RAID
controllers, not as interfaces to arbitrary SCSI devices.  The Linux DAC960
driver operates at the block device level, the same level as the SCSI and IDE
drivers.  Unlike other RAID controllers currently supported on Linux, the
DAC960 driver is not dependent on the SCSI subsystem, and hence avoids all the
complexity and unnecessary code that would be associated with an
implementation as a SCSI driver.  The DAC960 driver is designed for as high a
performance as possible with no compromises or extra code for compatibility
with lower performance devices.  The DAC960 driver includes extensive error
logging and online configuration management capabilities.  Except for initial
configuration of the controller and adding new disk drives, almost everything
can be handled from Linux while the system is operational.

The DAC960 driver is architected to support up to 8 controllers per system.
Each DAC960 controller can support up to 15 disk drives per channel, for a
maximum of 45 drives on a three channel controller.  The drives installed on a
controller are divided into one or more "Drive Groups", and then each Drive
Group is subdivided further into 1 to 32 "Logical Drives".  Each Logical Drive
has a specific RAID Level and caching policy associated with it, and it
appears to Linux as a single block device.  Logical Drives are further
subdivided into up to 7 partitions through the normal Linux and PC disk
partitioning schemes.  Logical Drives are also known as "System Drives", and
Drive Groups are also called "Packs".  Both terms are in use in the Mylex
documentation; I have chosen to standardize on the more generic "Logical
Drive" and "Drive Group".

DAC960 RAID disk devices are named in the style of the Device File System
(DEVFS).  The device corresponding to Logical Drive D on Controller C is
referred to as /dev/rd/cCdD, and the partitions are called /dev/rd/cCdDp1
through /dev/rd/cCdDp7.  For example, partition 3 of Logical Drive 5 on
Controller 2 is referred to as /dev/rd/c2d5p3.  Note that unlike with SCSI
disks, the device names will not change in the event of a disk drive failure.
The DAC960 driver is assigned major numbers 48 - 55 with one major number per
controller.  The 8 bits of minor number are divided into 5 bits for the
Logical Drive and 3 bits for the partition.

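The numbering scheme above can be sketched as shell arithmetic; the values
used here are just the /dev/rd/c2d5p3 example from the text:

```shell
# Sketch of the major/minor layout described above: majors 48-55 (one per
# controller), minor = 5 bits of Logical Drive plus 3 bits of partition.
controller=2 drive=5 partition=3
major=$((48 + controller))
minor=$(( (drive << 3) | partition ))
echo "/dev/rd/c${controller}d${drive}p${partition}: major $major, minor $minor"
```

So partition 3 of Logical Drive 5 on Controller 2 lands on major 50, minor 43.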
SUPPORTED DAC960/DAC1100 PCI RAID CONTROLLERS

The following list comprises the supported DAC960 and DAC1100 PCI RAID
Controllers as of the date of this document.  It is recommended that anyone
purchasing a Mylex PCI RAID Controller not in the following table contact the
author beforehand to verify that it is or will be supported.

eXtremeRAID 1100 (DAC1164P)
    3 Wide Ultra-2/LVD SCSI channels
    233MHz StrongARM SA 110 Processor
    64 Bit PCI (backward compatible with 32 Bit PCI slots)
    16MB/32MB/64MB Parity SDRAM Memory with Battery Backup

AcceleRAID 250 (DAC960PTL1)
    Uses onboard Symbios SCSI chips on certain motherboards
    Also includes one onboard Wide Ultra-2/LVD SCSI Channel
    66MHz Intel i960RD RISC Processor
    4MB/8MB/16MB/32MB/64MB/128MB ECC EDO Memory

AcceleRAID 200 (DAC960PTL0)
    Uses onboard Symbios SCSI chips on certain motherboards
    Includes no onboard SCSI Channels
    66MHz Intel i960RD RISC Processor
    4MB/8MB/16MB/32MB/64MB/128MB ECC EDO Memory

AcceleRAID 150 (DAC960PRL)
    Uses onboard Symbios SCSI chips on certain motherboards
    Also includes one onboard Wide Ultra-2/LVD SCSI Channel
    33MHz Intel i960RP RISC Processor
    4MB Parity EDO Memory

DAC960PJ    1/2/3 Wide Ultra SCSI-3 Channels
    66MHz Intel i960RD RISC Processor
    4MB/8MB/16MB/32MB/64MB/128MB ECC EDO Memory

DAC960PG    1/2/3 Wide Ultra SCSI-3 Channels
    33MHz Intel i960RP RISC Processor
    4MB/8MB ECC EDO Memory

DAC960PU    1/2/3 Wide Ultra SCSI-3 Channels
    Intel i960CF RISC Processor
    4MB/8MB EDRAM or 2MB/4MB/8MB/16MB/32MB DRAM Memory

DAC960PD    1/2/3 Wide Fast SCSI-2 Channels
    Intel i960CF RISC Processor
    4MB/8MB EDRAM or 2MB/4MB/8MB/16MB/32MB DRAM Memory

DAC960PL    1/2/3 Wide Fast SCSI-2 Channels
    Intel i960 RISC Processor
    2MB/4MB/8MB/16MB/32MB DRAM Memory

For the eXtremeRAID 1100, firmware version 5.06-0-52 or above is required.

For the AcceleRAID 250, 200, and 150, firmware version 4.06-0-57 or above is
required.

For the DAC960PJ and DAC960PG, firmware version 4.06-0-00 or above is required.

For the DAC960PU, DAC960PD, and DAC960PL, firmware version 3.51-0-04 or above
is required.

Note that earlier revisions of the DAC960PU, DAC960PD, and DAC960PL
controllers were delivered with version 2.xx firmware.  Version 2.xx firmware
is not supported by this driver and no support is envisioned.  Contact Mylex
RAID Technical Support to inquire about upgrading controllers with version
2.xx firmware to version 3.51-0-04.  Upgrading to version 3.xx firmware
requires installation of higher capacity Flash ROM chips, and not all DAC960PD
and DAC960PL controllers can be upgraded.

Please note that not all SCSI disk drives are suitable for use with DAC960
controllers, and only particular firmware versions of any given model may
actually function correctly.  Similarly, not all motherboards have a BIOS that
properly initializes the AcceleRAID 250, AcceleRAID 200, AcceleRAID 150,
DAC960PJ, and DAC960PG because the Intel i960RD/RP is a multi-function device.
If in doubt, contact Mylex RAID Technical Support (support@mylex.com) to
verify compatibility.  Mylex makes available a hard disk compatibility list by
FTP at ftp://ftp.mylex.com/pub/dac960/diskcomp.html.

This distribution was prepared for Linux kernel version 2.2.11 or 2.0.37.

To install the DAC960 RAID driver, you may use the following commands,
replacing "/usr/src" with wherever you keep your Linux kernel source tree:

  tar -xvzf DAC960-2.2.4.tar.gz (or DAC960-2.0.4.tar.gz)
  mv README.DAC960 linux/Documentation
  mv DAC960.[ch] linux/drivers/block
  patch -p0 < DAC960.patch

  make bzImage (or zImage)

Then install "arch/i386/boot/bzImage" or "arch/i386/boot/zImage" as your
standard kernel, run lilo if appropriate, and reboot.

To create the necessary devices in /dev, the "make_rd" script included in
"DAC960-Utilities.tar.gz" from http://www.dandelion.com/Linux/ may be used.
LILO 21 and FDISK v2.9 include DAC960 support; also included in this archive
are patches to LILO 20 and FDISK v2.8 that add DAC960 support, along with
statically linked executables of LILO and FDISK.  This modified version of
LILO will allow booting from a DAC960 controller and/or mounting the root
file system from a DAC960.

Red Hat Linux 6.0 and SuSE Linux 6.1 include support for Mylex PCI RAID
controllers.  Installing directly onto a DAC960 may be problematic from other
Linux distributions until their installation utilities are updated.

Before installing Linux or adding DAC960 logical drives to an existing Linux
system, the controller must first be configured to provide one or more logical
drives using the BIOS Configuration Utility or DACCF.  Please note that since
there are only at most 6 usable partitions on each logical drive, systems
requiring more partitions should subdivide a drive group into multiple logical
drives, each of which can have up to 6 partitions.  Also, note that with large
disk arrays it is advisable to enable the 8GB BIOS Geometry (255/63) rather
than accepting the default 2GB BIOS Geometry (128/32); failing to do so will
cause the logical drive geometry to have more than 65535 cylinders, which will
make it impossible for FDISK to be used properly.  The 8GB BIOS Geometry can
be enabled by configuring the DAC960 BIOS, which is accessible via Alt-M
during the BIOS initialization sequence.

For maximum performance and the most efficient E2FSCK performance, it is
recommended that EXT2 file systems be built with a 4KB block size and 16 block
stride to match the DAC960 controller's 64KB default stripe size.  The command
"mke2fs -b 4096 -R stride=16 <device>" is appropriate.  Unless there will be a
large number of small files on the file systems, it is also beneficial to add
the "-i 16384" option to increase the bytes per inode parameter, thereby
reducing the file system metadata.  Finally, on systems that will only be run
with Linux 2.2 or later kernels, it is beneficial to enable sparse superblocks
with the "-s 1" option.

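The recommendations above combine into a single mke2fs invocation.  The
command is only constructed and printed here, since mke2fs destroys existing
data, and /dev/rd/c0d0p1 is merely an example device:

```shell
# Recommended EXT2 parameters from the text: 4KB blocks, a stride of 16 to
# match the 64KB stripe size, fewer inodes, and (for Linux 2.2 or later
# systems only) sparse superblocks.  Printed rather than executed.
device=/dev/rd/c0d0p1
cmd="mke2fs -b 4096 -R stride=16 -i 16384 -s 1 $device"
echo "$cmd"
```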
DAC960 ANNOUNCEMENTS MAILING LIST

The DAC960 Announcements Mailing List provides a forum for informing Linux
users of new driver releases and other announcements regarding Linux support
for DAC960 PCI RAID Controllers.  To join the mailing list, send a message to
"dac960-announce-request@dandelion.com" with the line "subscribe" in the
message body.

CONTROLLER CONFIGURATION AND STATUS MONITORING

The DAC960 RAID controllers running firmware 4.06 or above include a
Background Initialization facility so that system downtime is minimized both
for initial installation and subsequent configuration of additional storage.
The BIOS Configuration Utility (accessible via Alt-R during the BIOS
initialization sequence) is used to quickly configure the controller, and
then the logical drives that have been created are available for immediate
use even while they are still being initialized by the controller.  The
primary need for online configuration and status monitoring is then to avoid
system downtime when disk drives fail and must be replaced.  Mylex's online
monitoring and configuration utilities are being ported to Linux and will
become available at some point in the future.  Note that with a SAF-TE (SCSI
Accessed Fault-Tolerant Enclosure) enclosure, the controller is able to
rebuild failed drives automatically as soon as a drive replacement is made
available.

The primary interfaces for controller configuration and status monitoring are
special files created in the /proc/rd/... hierarchy along with the normal
system console logging mechanism.  Whenever the system is operating, the
DAC960 driver queries each controller for status information every 10
seconds, and checks for additional conditions every 60 seconds.  The initial
status of each controller is always available for controller N in
/proc/rd/cN/initial_status, and the current status as of the last status
monitoring query is available in /proc/rd/cN/current_status.  In addition,
status changes are also logged by the driver to the system console and will
appear in the log files maintained by syslog.  The progress of asynchronous
rebuild or consistency check operations is also available in
/proc/rd/cN/current_status, and progress messages are logged to the system
console at most every 60 seconds.

Starting with the 2.2.3/2.0.3 versions of the driver, the status information
available in /proc/rd/cN/initial_status and /proc/rd/cN/current_status has
been augmented to include the vendor, model, revision, and serial number (if
available) for each physical device found connected to the controller:

***** DAC960 RAID Driver Version 2.2.3 of 19 August 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PRL PCI RAID Controller
  Firmware Version: 4.07-0-07, Channels: 1, Memory Size: 16MB
  PCI Bus: 1, Device: 4, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFE300000 mapped at 0xA0800000, IRQ Channel: 21
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  SAF-TE Enclosure Management Enabled
  Physical Devices:
    0:0  Vendor: IBM     Model: DRVS09D       Revision: 0270
         Serial Number: 68016775HA
         Disk Status: Online, 17928192 blocks
    0:1  Vendor: IBM     Model: DRVS09D       Revision: 0270
         Serial Number: 68004E53HA
         Disk Status: Online, 17928192 blocks
    0:2  Vendor: IBM     Model: DRVS09D       Revision: 0270
         Serial Number: 13013935HA
         Disk Status: Online, 17928192 blocks
    0:3  Vendor: IBM     Model: DRVS09D       Revision: 0270
         Serial Number: 13016897HA
         Disk Status: Online, 17928192 blocks
    0:4  Vendor: IBM     Model: DRVS09D       Revision: 0270
         Serial Number: 68019905HA
         Disk Status: Online, 17928192 blocks
    0:5  Vendor: IBM     Model: DRVS09D       Revision: 0270
         Serial Number: 68012753HA
         Disk Status: Online, 17928192 blocks
    0:6  Vendor: ESG-SHV Model: SCA HSBP M6   Revision: 0.61
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 89640960 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

To simplify the monitoring process for custom software, the special file
/proc/rd/status returns "OK" when all DAC960 controllers in the system are
operating normally and no failures have occurred, or "ALERT" if any logical
drives are offline or critical or any non-standby physical drives are dead.

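A minimal sketch of such custom monitoring, assuming only the /proc/rd/status
semantics described above; the function name and the mail command in the
example comment are illustrative, not part of the driver:

```shell
# Succeed only while the status file (the real /proc/rd/status, or a
# caller-supplied path so the function can be tested) reports "OK";
# anything else, including "ALERT", fails.
rd_status_ok() {
    [ "$(cat "${1:-/proc/rd/status}")" = "OK" ]
}
# Example: rd_status_ok || echo "RAID degraded" | mail -s ALERT root
```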
Configuration commands for controller N are available via the special file
/proc/rd/cN/user_command.  A human readable command can be written to this
special file to initiate a configuration operation, and the results of the
operation can then be read back from the special file in addition to being
logged to the system console.  The shell command sequence

  echo "<configuration-command>" > /proc/rd/c0/user_command
  cat /proc/rd/c0/user_command

is typically used to execute configuration commands.  The configuration
commands are:

  flush-cache

    The "flush-cache" command flushes the controller's cache.  The system
    automatically flushes the cache at shutdown or if the driver module is
    unloaded, so this command is only needed to be certain a write back cache
    is flushed to disk before the system is powered off by a command to a
    UPS.  Note that the flush-cache command also stops an asynchronous
    rebuild or consistency check, so it should not be used except when the
    system is being halted.

  kill <channel>:<target-id>

    The "kill" command marks the physical drive <channel>:<target-id> as
    DEAD.  This command is provided primarily for testing, and should not be
    used during normal system operation.

  make-online <channel>:<target-id>

    The "make-online" command changes the physical drive
    <channel>:<target-id> from status DEAD to status ONLINE.  In cases where
    multiple physical drives have been killed simultaneously, this command
    may be used to bring them back online, after which a consistency check
    is advisable.

    Warning: make-online should only be used on a dead physical drive that
    is an active part of a drive group, never on a standby drive.

  make-standby <channel>:<target-id>

    The "make-standby" command changes physical drive <channel>:<target-id>
    from status DEAD to status STANDBY.  It should only be used in cases
    where a dead drive was replaced after an automatic rebuild was performed
    onto a standby drive.  It cannot be used to add a standby drive to the
    controller configuration if one was not created initially; the BIOS
    Configuration Utility must be used for that currently.

  rebuild <channel>:<target-id>

    The "rebuild" command initiates an asynchronous rebuild onto physical
    drive <channel>:<target-id>.  It should only be used when a dead drive
    has been replaced.

  check-consistency <logical-drive-number>

    The "check-consistency" command initiates an asynchronous consistency
    check of <logical-drive-number> with automatic restoration.  It can be
    used whenever it is desired to verify the consistency of the redundancy
    information.

  cancel-rebuild
  cancel-consistency-check

    The "cancel-rebuild" and "cancel-consistency-check" commands cancel any
    rebuild or consistency check operations previously initiated.

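The echo/cat sequence shown earlier can be wrapped in a small helper; this is
a sketch, not part of the driver, and the RD_PROC variable is an assumption
added here so the function can be pointed at a test directory instead of the
real /proc/rd hierarchy:

```shell
# Write a configuration command for controller $1 and read back the result,
# mirroring the echo/cat sequence described in the text.
: "${RD_PROC:=/proc/rd}"
dac960_cmd() {
    ctl=$1; shift
    echo "$*" > "$RD_PROC/c$ctl/user_command"
    cat "$RD_PROC/c$ctl/user_command"
}
# Example: dac960_cmd 0 rebuild 1:1
```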
EXAMPLE I - DRIVE FAILURE WITHOUT A STANDBY DRIVE

The following annotated logs demonstrate the controller configuration and
online status monitoring capabilities of the Linux DAC960 Driver.  The test
configuration comprises 6 1GB Quantum Atlas I disk drives on two channels of
a DAC960PJ controller.  The physical drives are configured into a single
drive group without a standby drive, and the drive group has been configured
into two logical drives, one RAID-5 and one RAID-6.  Note that these logs are
from an earlier version of the driver and the messages have changed somewhat
with newer releases, but the functionality remains similar.  First, here is
the current status of the RAID configuration:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
***** DAC960 RAID Driver Version 2.0.0 of 23 March 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PJ PCI RAID Controller
  Firmware Version: 4.06-0-08, Channels: 3, Memory Size: 8MB
  PCI Bus: 0, Device: 19, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFD4FC000 mapped at 0x8807000, IRQ Channel: 9
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 3305472 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

gwynedd:/u/lnz# cat /proc/rd/status
OK

The above messages indicate that everything is healthy, and /proc/rd/status
returns "OK" indicating that there are no problems with any DAC960 controller
in the system.  For demonstration purposes, while I/O is active Physical
Drive 1:1 is now disconnected, simulating a drive failure.  The failure is
noted by the driver within 10 seconds of the controller's having detected it,
and the driver logs the following console status messages indicating that
Logical Drives 0 and 1 are now CRITICAL as a result of Physical Drive 1:1
being DEAD:

DAC960#0: Physical Drive 1:2 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:3 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:1 killed because of timeout on SCSI command
DAC960#0: Physical Drive 1:1 is now DEAD
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now CRITICAL
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now CRITICAL

The Sense Keys logged here are just Check Condition / Unit Attention
conditions arising from a SCSI bus reset that is forced by the controller
during its error recovery procedures.  Concurrently with the above, the
driver status available from /proc/rd also reflects the drive failure.  The
status message in /proc/rd/status has changed from "OK" to "ALERT":

gwynedd:/u/lnz# cat /proc/rd/status
ALERT

and /proc/rd/c0/current_status has been updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Dead, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 3305472 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

Since there are no standby drives configured, the system can continue to
access the logical drives in a performance degraded mode until the failed
drive is replaced and a rebuild operation completed to restore the redundancy
of the logical drives.  Once Physical Drive 1:1 is replaced with a properly
functioning drive, or if the physical drive was killed without having failed
(e.g., due to electrical problems on the SCSI bus), the user can instruct the
controller to initiate a rebuild operation onto the newly replaced drive:

gwynedd:/u/lnz# echo "rebuild 1:1" > /proc/rd/c0/user_command
gwynedd:/u/lnz# cat /proc/rd/c0/user_command
Rebuild of Physical Drive 1:1 Initiated

The echo command instructs the controller to initiate an asynchronous rebuild
operation onto Physical Drive 1:1, and the status message that results from
the operation is then available for reading from /proc/rd/c0/user_command, as
well as being logged to the console by the driver.

Within 10 seconds of this command the driver logs the initiation of the
asynchronous rebuild operation:

DAC960#0: Rebuild of Physical Drive 1:1 Initiated
DAC960#0: Physical Drive 1:1 Error Log: Sense Key = 6, ASC = 29, ASCQ = 01
DAC960#0: Physical Drive 1:1 is now WRITE-ONLY
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 1% completed

and /proc/rd/c0/current_status is updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Write-Only, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 3305472 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 6% completed

As the rebuild progresses, the current status in /proc/rd/c0/current_status
is updated every 10 seconds:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Write-Only, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 3305472 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 15% completed

and every minute a progress message is logged to the console by the driver:

DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 32% completed
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 63% completed
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 94% completed
DAC960#0: Rebuild in Progress: Logical Drive 1 (/dev/rd/c0d1) 94% completed

Finally, the rebuild completes successfully.  The driver logs the status of
the logical and physical drives and the rebuild completion:

DAC960#0: Rebuild Completed Successfully
DAC960#0: Physical Drive 1:1 is now ONLINE
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now ONLINE
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now ONLINE

/proc/rd/c0/current_status is updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 3305472 blocks, Write Thru
  Rebuild Completed Successfully

and /proc/rd/status indicates that everything is healthy once again:

gwynedd:/u/lnz# cat /proc/rd/status
OK

EXAMPLE II - DRIVE FAILURE WITH A STANDBY DRIVE

The following annotated logs demonstrate the controller configuration and
online status monitoring capabilities of the Linux DAC960 Driver.  The test
configuration comprises 6 1GB Quantum Atlas I disk drives on two channels of
a DAC960PJ controller.  The physical drives are configured into a single
drive group with a standby drive, and the drive group has been configured
into two logical drives, one RAID-5 and one RAID-6.  Note that these logs are
from an earlier version of the driver and the messages have changed somewhat
with newer releases, but the functionality remains similar.  First, here is
the current status of the RAID configuration:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
***** DAC960 RAID Driver Version 2.0.0 of 23 March 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PJ PCI RAID Controller
  Firmware Version: 4.06-0-08, Channels: 3, Memory Size: 8MB
  PCI Bus: 0, Device: 19, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFD4FC000 mapped at 0x8807000, IRQ Channel: 9
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Standby, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 2754560 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

gwynedd:/u/lnz# cat /proc/rd/status
OK

The above messages indicate that everything is healthy, and /proc/rd/status
returns "OK" indicating that there are no problems with any DAC960 controller
in the system.  For demonstration purposes, while I/O is active Physical
Drive 1:2 is now disconnected, simulating a drive failure.  The failure is
noted by the driver within 10 seconds of the controller's having detected it,
and the driver logs the following console status messages:

DAC960#0: Physical Drive 1:1 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:3 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:2 killed because of timeout on SCSI command
DAC960#0: Physical Drive 1:2 is now DEAD
DAC960#0: Physical Drive 1:2 killed because it was removed
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now CRITICAL
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now CRITICAL

Since a standby drive is configured, the controller automatically begins
rebuilding onto the standby drive:

DAC960#0: Physical Drive 1:3 is now WRITE-ONLY
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 4% completed

Concurrently with the above, the driver status available from /proc/rd also
reflects the drive failure and automatic rebuild.  The status message in
/proc/rd/status has changed from "OK" to "ALERT":

gwynedd:/u/lnz# cat /proc/rd/status
ALERT

and /proc/rd/c0/current_status has been updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Dead, 2201600 blocks
    1:3 - Disk: Write-Only, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 2754560 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 4% completed

As the rebuild progresses, the current status in /proc/rd/c0/current_status
is updated every 10 seconds:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Dead, 2201600 blocks
    1:3 - Disk: Write-Only, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 2754560 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 40% completed

and every minute a progress message is logged on the console by the driver:

DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 40% completed
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 76% completed
DAC960#0: Rebuild in Progress: Logical Drive 1 (/dev/rd/c0d1) 66% completed
DAC960#0: Rebuild in Progress: Logical Drive 1 (/dev/rd/c0d1) 84% completed

Finally, the rebuild completes successfully.  The driver logs the status of
the logical and physical drives and the rebuild completion:

DAC960#0: Rebuild Completed Successfully
DAC960#0: Physical Drive 1:3 is now ONLINE
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now ONLINE
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now ONLINE

/proc/rd/c0/current_status is updated:

***** DAC960 RAID Driver Version 2.0.0 of 23 March 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PJ PCI RAID Controller
  Firmware Version: 4.06-0-08, Channels: 3, Memory Size: 8MB
  PCI Bus: 0, Device: 19, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFD4FC000 mapped at 0x8807000, IRQ Channel: 9
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Dead, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 2754560 blocks, Write Thru
  Rebuild Completed Successfully

and /proc/rd/status indicates that everything is healthy once again:

gwynedd:/u/lnz# cat /proc/rd/status
OK

Note that the absence of a viable standby drive does not create an "ALERT"
status.  Once dead Physical Drive 1:2 has been replaced, the controller must
be told that this has occurred and that the newly replaced drive should
become the new standby drive:

gwynedd:/u/lnz# echo "make-standby 1:2" > /proc/rd/c0/user_command
gwynedd:/u/lnz# cat /proc/rd/c0/user_command
Make Standby of Physical Drive 1:2 Succeeded

The echo command instructs the controller to make Physical Drive 1:2 into a
standby drive, and the status message that results from the operation is then
available for reading from /proc/rd/c0/user_command, as well as being logged
to the console by the driver.  Within 60 seconds of this command the driver
logs:

DAC960#0: Physical Drive 1:2 Error Log: Sense Key = 6, ASC = 29, ASCQ = 01
DAC960#0: Physical Drive 1:2 is now STANDBY
DAC960#0: Make Standby of Physical Drive 1:2 Succeeded

and /proc/rd/c0/current_status is updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Standby, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 2754560 blocks, Write Thru
  Rebuild Completed Successfully