From 5c123e20b4e588a273b9cb16c24c3aac8cdb91b3 Mon Sep 17 00:00:00 2001
From: NeilBrown
Date: Wed, 24 Nov 2010 16:39:46 +1100
Subject: [PATCH] md/raid1: really fix recovery looping when single good device fails.
MIME-Version: 1.0
Content-Type: text/plain; charset=utf8
Content-Transfer-Encoding: 8bit

commit 8f9e0ee38f75d4740daa9e42c8af628d33d19a02 upstream.

Commit 4044ba58dd15cb01797c4fd034f39ef4a75f7cc3 supposedly fixed a
problem where if a raid1 with just one good device gets a read-error
during recovery, the recovery would abort and immediately restart in
an infinite loop.

However it depended on raid1_remove_disk removing the spare device
from the array.  But that does not happen in this case.  So add a test
so that in the 'recovery_disabled' case, the device will be removed.

This is suitable for any kernel since 2.6.29, which is when
recovery_disabled was introduced.

Reported-by: Sebastian Färber
Signed-off-by: NeilBrown
Signed-off-by: Greg Kroah-Hartman
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 0b830bbe1d8..d8b2d7b0c3b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1210,6 +1210,7 @@ static int raid1_remove_disk(mddev_t *mddev, int number)
 	 * is not possible.
 	 */
 	if (!test_bit(Faulty, &rdev->flags) &&
+	    !mddev->recovery_disabled &&
 	    mddev->degraded < conf->raid_disks) {
 		err = -EBUSY;
 		goto abort;
--
2.11.4.GIT
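
For context, the one-line hunk above only extends the guard in raid1_remove_disk().
The standalone C sketch below models that decision logic; it is an illustration, not
kernel code: the struct "model_mddev", its field names, and the simplified return
convention are assumptions modelled on the context lines in the diff. The point it
shows is that, with the added !recovery_disabled test, a non-faulty device can still
be removed once recovery has been disabled, which is what breaks the abort/restart
loop described in the changelog.

	/* Illustrative userspace model of the raid1_remove_disk() guard.
	 * Not kernel code: struct model_mddev is a simplified stand-in for
	 * the real mddev_t/conf_t fields referenced in the hunk above. */
	#include <errno.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct model_mddev {
		bool faulty;            /* Faulty bit on the rdev being removed */
		int  recovery_disabled; /* set after recovery gave up           */
		int  degraded;          /* missing/failed members               */
		int  raid_disks;        /* members in the array                 */
	};

	/* Returns 0 if removal proceeds, -EBUSY if it is refused. */
	static int model_remove_disk(const struct model_mddev *m)
	{
		/* Before the patch the middle test was absent, so a non-faulty
		 * device in a degraded array was always refused with -EBUSY and
		 * the failed recovery kept restarting. */
		if (!m->faulty &&
		    !m->recovery_disabled &&
		    m->degraded < m->raid_disks)
			return -EBUSY;
		return 0;
	}

	int main(void)
	{
		/* One good device, recovery disabled after a read error:
		 * with the extra test the spare can now be removed (prints 0). */
		struct model_mddev m = {
			.faulty = false, .recovery_disabled = 1,
			.degraded = 1,   .raid_disks = 2,
		};
		printf("remove -> %d\n", model_remove_disk(&m));
		return 0;
	}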