Hello.
I have a problem with my Synology. It's not a Plex problem, but I'm desperate, and I think there are smart people on this forum who can help me solve it.
I have a DS2413+ with 4 drives (4 x 4 TB) and I just added 8 drives (8 x 5 TB) to expand capacity. SHR with btrfs. The volume consistency check was successful and the repair had been running for about 1.5 days. During the 'Repairing (Adding disk xx.xx%)' stage I opened an SSH session and issued two commands to try to speed up the repair:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
echo 32768 > /sys/block/md4/md/stripe_cache_size
The repair continued fine after I increased the minimum speed limit from 10000 to 50000. However, when I increased the stripe cache size from 1024 (the default) to 32768, the SSH session dropped, the DSM web interface became unresponsive, and the NAS rebooted. DSM is working now, but it reports that the volume has crashed.
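For reference, this is how I understand those two tunables (the "defaults" I mention are just the values my box reported before I changed them, so treat them as an assumption, not a universal default):
cat /proc/sys/dev/raid/speed_limit_min       # minimum rebuild speed in KB/s per device (was 10000 here)
cat /sys/block/md4/md/stripe_cache_size      # RAID5/6 stripe cache size in pages (was 1024 here)
# to put them back to what they were on my system:
echo 10000 > /proc/sys/dev/raid/speed_limit_min
echo 1024 > /sys/block/md4/md/stripe_cache_size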
I found some commands that have been used to solve cases like mine.
sfdisk -l
Error: /dev/md2: unrecognised disk label
get disk fail
Error: /dev/md3: unrecognised disk label
get disk fail
Error: /dev/md5: unrecognised disk label
get disk fail
Error: /dev/md6: unrecognised disk label
get disk fail
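If I understand it correctly, md2-md6 are LVM physical volumes rather than partitioned disks, so sfdisk complaining about an unrecognised disk label may not mean much by itself. To see which arrays are actually assembled and in what state, I also looked at:
cat /proc/mdstat             # which md arrays exist and which members each one has
mdadm --detail /dev/md2      # per-array state, member list and event count (repeat for md3..md6)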
cd /etc/space
ll
Original 4 drives with data:
-rw-rw-rw- 1 root root 4577 Oct 21 21:13 space_history_20161021_211345.xml
After inserting the 8 new drives:
-rw-rw-rw- 1 root root 12902 Oct 21 21:53 space_history_20161021_215326.xml
After the crash:
-rw-rw-rw- 1 root root 10274 Oct 23 11:18 space_history_20161023_111845.xml
After the restart:
-rw-rw-rw- 1 root root 10274 Oct 23 13:37 space_history_20161023_133740.xml
After the sdb8 drive failed:
-rw-rw-rw- 1 root root 9457 Oct 24 03:29 space_history_20161024_032915.xml
cat space_history_20161021_215326.xml
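The XML itself is long; assuming its structure matches the excerpts I paste further down (each array appears as a raid path=... uuid=... level=... entry), something like this should pull out just the array definitions:
grep 'raid ' /etc/space/space_history_20161021_215326.xml    # one line per array: path, uuid, level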
This is the part where I get confused.
mdadm -E /dev/sd[abcdefgh]7
mdadm -E /dev/sd[abcdefgh]7 > /root/sdABCDEFGH7superb.txt
synospace.sh -p /etc/space/space_history_20161021_215326.xml
mdadm -C /dev/md6 -R -l raid5 -n 8 --assume-clean missing missing missing missing missing missing missing missing
mdadm -C /dev/md2 -R -l raid5 -n 12 --assume-clean missing missing missing missing missing missing missing missing missing missing missing missing
mdadm -C /dev/md3 -R -l raid5 -n 12 --assume-clean missing missing missing missing missing missing missing missing missing missing missing missing
mdadm -C /dev/md4 -R -l raid5 -n 12 --assume-clean missing missing missing missing missing missing missing missing missing missing missing missing
mdadm -C /dev/md5 -R -l raid5 -n 12 --assume-clean missing missing missing missing missing missing missing missing missing missing missing missing
mdadm -C /dev/md4 -R -l raid5 -n 7 --assume-clean /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 /dev/sdh7 /dev/sdg7 /dev/sde7
So am I right in thinking it might be worth trying an mdadm force-assemble? As far as I understand, that command by itself shouldn't cause any irreparable damage, correct?
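My (possibly naive) understanding is that a forced assemble, unlike mdadm -C, only tries to bring the existing superblocks back together and does not rewrite the array geometry, so it should be the less destructive thing to try first. I imagine it would look something like this; the member list is my assumption based on the examine output saying devices=12, so please correct it to whatever partitions really belong to md4:
mdadm --stop /dev/md4                                       # stop whatever is currently (half-)assembled as md4
mdadm --assemble --force /dev/md4 /dev/sd[abcdefghijkl]7    # reassemble from the existing superblocks
Anyway, here is what the space_history XML and mdadm -E show for md4: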
raid path="/dev/md4" uuid="80b6fb38:60fc64ad:8267cc1d:750f50d4" level="raid5" version="1.2"
/dev/sda7:
Array UUID : 80b6fb38:60fc64ad:8267cc1d:750f50d4
Name : DiskStation:4 (local to host DiskStation)
Going by the XML, I guessed the command would be:
mdadm -C /dev/md4 -R -l raid5 -n 7 --assume-clean /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 /dev/sdh7 /dev/sdg7 /dev/sde7
mdadm: /dev/sda7 appears to be part of a raid array:
level=raid5 devices=12 ctime=Sat May 21 04:00:36 2016
mdadm: /dev/sdb7 appears to be part of a raid array:
level=raid5 devices=12 ctime=Sat May 21 04:00:36 2016
mdadm: /dev/sdc7 appears to be part of a raid array:
level=raid5 devices=12 ctime=Sat May 21 04:00:36 2016
mdadm: /dev/sdd7 appears to be part of a raid array:
level=raid5 devices=12 ctime=Sat May 21 04:00:36 2016
mdadm: /dev/sdh7 appears to be part of a raid array:
level=raid5 devices=12 ctime=Sat May 21 04:00:36 2016
mdadm: /dev/sdg7 appears to be part of a raid array:
level=raid5 devices=12 ctime=Sat May 21 04:00:36 2016
mdadm: /dev/sde7 appears to be part of a raid array:
level=raid5 devices=12 ctime=Sat May 21 04:00:36 2016
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md4 started.
mdadm --detail /dev/md4
/dev/md4:
Version : 1.2
Creation Time : Mon Oct 24 16:05:50 2016
Raid Level : raid5
Array Size : 5860457472 (5588.97 GiB 6001.11 GB)
Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
Raid Devices : 7
Total Devices : 7
Persistence : Superblock is persistent
Update Time : Mon Oct 24 16:05:50 2016
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : DiskStation:4 (local to host DiskStation)
UUID : d8dc80f4:4f116ae7:535e5dff:79aea56d
Events : 0
Number Major Minor RaidDevice State
0 8 7 0 active sync /dev/sda7
1 8 23 1 active sync /dev/sdb7
2 8 39 2 active sync /dev/sdc7
3 8 55 3 active sync /dev/sdd7
4 8 119 4 active sync /dev/sdh7
5 8 103 5 active sync /dev/sdg7
6 8 71 6 active sync /dev/sde7
pvs
PV VG Fmt Attr PSize PFree
/dev/md2 vg1000 lvm2 a-- 2.72t 0
/dev/md3 vg1000 lvm2 a-- 2.73t 0
/dev/md4 vg1000 lvm2 a-- 2.73t 0
/dev/md5 vg1000 lvm2 a-- 2.73t 0
/dev/md6 vg1000 lvm2 a-- 6.37t 0
This is the critical part where I get confused: what device count should I use for -n? PLEASE HELP. -n 6?
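Looking back at the numbers now, my own (possibly wrong) reasoning is: for RAID5 the array size is the per-device size times (number of devices minus one), and the old superblocks on the sdX7 partitions all say devices=12, so I would expect -n 12, with the word missing for members that are gone, rather than 6 or 7. A quick size sanity check (values in KiB, taken from the mdadm --detail outputs in this post):
echo $((976742912 * 6))     # 5860457472  = what a 7-device RAID5 gives (matches the wrong md4 above)
echo $((976742912 * 11))    # 10744172032 = what a 12-device RAID5 should give (matches md3/md5 further down)
But at the time I went with -n 6: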
mdadm -CR -amd --assume-clean -n6 -e1.2 -l5 /dev/md4 /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 /dev/sdh7 /dev/sdg7
mdadm: /dev/sda7 appears to be part of a raid array:
level=raid5 devices=6 ctime=Mon Oct 24 16:39:26 2016
mdadm: /dev/sdb7 appears to be part of a raid array:
level=raid5 devices=6 ctime=Mon Oct 24 16:39:26 2016
mdadm: /dev/sdc7 appears to be part of a raid array:
level=raid5 devices=6 ctime=Mon Oct 24 16:39:26 2016
mdadm: /dev/sdd7 appears to be part of a raid array:
level=raid5 devices=6 ctime=Mon Oct 24 16:39:26 2016
mdadm: /dev/sdh7 appears to be part of a raid array:
level=raid5 devices=6 ctime=Mon Oct 24 16:39:26 2016
mdadm: /dev/sdg7 appears to be part of a raid array:
level=raid5 devices=6 ctime=Mon Oct 24 16:39:26 2016
mdadm: array /dev/md4 started.
pvs
PV VG Fmt Attr PSize PFree
/dev/md2 vg1000 lvm2 a-- 2.72t 0
/dev/md3 vg1000 lvm2 a-- 2.73t 0
/dev/md4 vg1000 lvm2 a-- 2.73t 0
/dev/md5 vg1000 lvm2 a-- 2.73t 0
/dev/md6 vg1000 lvm2 a-- 6.37t 0
lvm lvscan
inactive '/dev/vg1000/lv' [17.27 TiB] inherit
vgchange -ay
1 logical volume(s) in volume group "vg1000" now active
mount /dev/vg1000/lv /volume1
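In hindsight I probably should have mounted read-only first while the array geometry was still in doubt, something like:
mount -o ro /dev/vg1000/lv /volume1    # read-only, so nothing is written through a possibly wrong geometry
umount /volume1                        # and take it down again before further experiments
vgchange -an vg1000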
While trying to repair /dev/md4, I mistakenly used -n 7 instead of -n 12.
Wrong:
mdadm -C /dev/md4 -R -l raid5 -n 7 --assume-clean /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 /dev/sdh7 /dev/sdg7 /dev/sde7
Right:
mdadm -C /dev/md4 -R -l raid5 -n 12 --assume-clean /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 /dev/sdh7 /dev/sdg7 /dev/sde7
Wrong:
mdadm -CR -amd --assume-clean -n 6 -e1.2 -l5 /dev/md4 /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 /dev/sdh7 /dev/sdg7
Right?
mdadm -CR -amd --assume-clean -n 12 -e1.2 -l5 /dev/md4 /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 /dev/sdh7 /dev/sdg7
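What I think I understand, but please correct me: with -n 12, mdadm -C needs all twelve slots listed, using the word missing for members that are gone, and in the original slot order, which is not necessarily alphabetical. mdadm -E prints each member's slot as "Device Role", so the order should come from an examine taken before any re-create (for example the /root/sdABCDEFGH7superb.txt dump mentioned above, if it was saved in time):
grep -E '/dev/sd|Device Role|Array UUID' /root/sdABCDEFGH7superb.txt
# only then build the full command; the line below is just a hypothetical shape, NOT the real order:
# mdadm -C /dev/md4 -R -l raid5 -n 12 -e1.2 --assume-clean /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 missing missing missing missing /dev/sdi7 /dev/sdj7 /dev/sdk7 /dev/sdl7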
So now my Volume appears damaged.
raid path="/dev/md2" uuid="0ea2e18c:2819c5dc:07b96b51:cd208f63" > /dev/sda5
raid path="/dev/md3" uuid="4032df32:cee99c16:5ecf7189:04408674" > /dev/sda6
raid path="/dev/md4" uuid="80b6fb38:60fc64ad:8267cc1d:750f50d4" > /dev/sda7
raid path="/dev/md5" uuid="d3f9e63f:9fc77224:7223b3b6:34256bd6" > /dev/sda8
raid path="/dev/md6" uuid="3e3dc3b7:44855efa:ef8d1eeb:e2ac8a03" > /dev/sde9
raid path="/dev/md7" uuid="d8dc80f4:4f116ae7:535e5dff:79aea56d" > /dev/sde7
/dev/md2 vg1000 lvm2 a-- 2.72t 0
/dev/md3 vg1000 lvm2 a-- 2.73t 0
/dev/md4 vg1000 lvm2 a-- 2.73t 0
/dev/md5 vg1000 lvm2 a-- 2.73t 0
/dev/md6 vg1000 lvm2 a-- 6.37t 0
And worst of all, while I was trying to repair the volume, disk 8 failed, so my RAID5 arrays are now degraded as well, but I believe they can still be rebuilt with the right help.
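For the arrays that are just degraded (md1, md2, md3, md5, md6 below), my understanding is that once the md4 mess is sorted out and the failed drive is replaced, or turns out to be readable, the missing member can simply be added back and md will rebuild it, for example:
mdadm --manage /dev/md2 --add /dev/sdh5    # example only: the missing member of md2 looks like sdh5 from the detail below
cat /proc/mdstat                           # watch the rebuild progress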
mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Tue Oct 25 02:59:40 2016
Raid Level : raid1
Array Size : 2097088 (2048.28 MiB 2147.42 MB)
Used Dev Size : 2097088 (2048.28 MiB 2147.42 MB)
Raid Devices : 12
Total Devices : 11
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Oct 25 03:03:21 2016
State : active, degraded
Active Devices : 11
Working Devices : 11
Failed Devices : 0
Spare Devices : 0
UUID : 33c748d2:bbc7472b:3017a5a8:c86610be (local to host DiskStation)
Events : 0.19
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
2 8 34 2 active sync /dev/sdc2
3 8 50 3 active sync /dev/sdd2
4 8 66 4 active sync /dev/sde2
5 8 82 5 active sync /dev/sdf2
6 8 98 6 active sync /dev/sdg2
7 8 130 7 active sync /dev/sdi2
8 8 146 8 active sync /dev/sdj2
9 8 162 9 active sync /dev/sdk2
10 8 178 10 active sync /dev/sdl2
11 0 0 11 removed
mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Wed May 18 18:16:26 2016
Raid Level : raid5
Array Size : 10691246016 (10195.97 GiB 10947.84 GB)
Used Dev Size : 971931456 (926.91 GiB 995.26 GB)
Raid Devices : 12
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Tue Oct 25 03:01:24 2016
State : clean, degraded
Active Devices : 11
Working Devices : 11
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : DiskStation:2 (local to host DiskStation)
UUID : 0ea2e18c:2819c5dc:07b96b51:cd208f63
Events : 37291
Number Major Minor RaidDevice State
4 8 5 0 active sync /dev/sda5
5 8 21 1 active sync /dev/sdb5
6 8 37 2 active sync /dev/sdc5
7 8 53 3 active sync /dev/sdd5
15 8 181 4 active sync /dev/sdl5
14 8 165 5 active sync /dev/sdk5
13 8 149 6 active sync /dev/sdj5
12 8 133 7 active sync /dev/sdi5
8 0 0 8 removed
10 8 101 9 active sync /dev/sdg5
9 8 85 10 active sync /dev/sdf5
8 8 69 11 active sync /dev/sde5
mdadm --detail /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Fri May 20 22:37:48 2016
Raid Level : raid5
Array Size : 10744172032 (10246.44 GiB 11002.03 GB)
Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
Raid Devices : 12
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Tue Oct 25 03:00:21 2016
State : clean, degraded
Active Devices : 11
Working Devices : 11
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : DiskStation:3 (local to host DiskStation)
UUID : 4032df32:cee99c16:5ecf7189:04408674
Events : 15834
Number Major Minor RaidDevice State
4 8 54 0 active sync /dev/sdd6
6 8 6 1 active sync /dev/sda6
5 8 22 2 active sync /dev/sdb6
7 8 38 3 active sync /dev/sdc6
15 8 182 4 active sync /dev/sdl6
14 8 166 5 active sync /dev/sdk6
13 8 150 6 active sync /dev/sdj6
12 8 134 7 active sync /dev/sdi6
8 0 0 8 removed
10 8 102 9 active sync /dev/sdg6
9 8 86 10 active sync /dev/sdf6
8 8 70 11 active sync /dev/sde6
HERE'S THE PROBLEM! Raid Devices : 6 and Total Devices : 5, when it should be 12 devices.
mdadm --detail /dev/md4
/dev/md4:
Version : 1.2
Creation Time : Mon Oct 24 17:08:25 2016
Raid Level : raid5
Array Size : 4883714560 (4657.47 GiB 5000.92 GB)
Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
Raid Devices : 6
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Tue Oct 25 03:00:21 2016
State : clean, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : DiskStation:4 (local to host DiskStation)
UUID : 2d13abf1:788790a8:cbc1d111:c322fe27
Events : 28
Number Major Minor RaidDevice State
0 8 7 0 active sync /dev/sda7
1 8 23 1 active sync /dev/sdb7
2 8 39 2 active sync /dev/sdc7
3 8 55 3 active sync /dev/sdd7
4 0 0 4 removed
5 8 103 5 active sync /dev/sdg7
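One thing that stands out to me when I compare this with the space_history XML: this md4 has UUID 2d13abf1:788790a8:cbc1d111:c322fe27, which no longer matches the original 80b6fb38:60fc64ad:8267cc1d:750f50d4 from the XML, and md7 further down carries the UUID of my first wrong -n 7 attempt. So both look like freshly created arrays rather than my original md4. A quick way to check which identity a given md device carries right now:
mdadm --detail /dev/md4 | grep -E 'UUID|Creation Time'    # compare against the uuid and dates in the space_history XML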
mdadm --detail /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Jul 10 07:41:06 2016
Raid Level : raid5
Array Size : 10744172032 (10246.44 GiB 11002.03 GB)
Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
Raid Devices : 12
Total Devices : 11
Persistence : Superblock is persistent
Update Time : Tue Oct 25 03:00:20 2016
State : clean, degraded
Active Devices : 11
Working Devices : 11
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : DiskStation:5 (local to host DiskStation)
UUID : d3f9e63f:9fc77224:7223b3b6:34256bd6
Events : 9814
Number Major Minor RaidDevice State
0 8 24 0 active sync /dev/sdb8
1 8 8 1 active sync /dev/sda8
2 8 40 2 active sync /dev/sdc8
3 8 56 3 active sync /dev/sdd8
11 8 184 4 active sync /dev/sdl8
10 8 168 5 active sync /dev/sdk8
9 8 152 6 active sync /dev/sdj8
8 8 136 7 active sync /dev/sdi8
8 0 0 8 removed
6 8 104 9 active sync /dev/sdg8
5 8 88 10 active sync /dev/sdf8
4 8 72 11 active sync /dev/sde8
AND PERHAPS THERE IS A PROBLEM HERE AS WELL, BUT I'M NOT SURE!
mdadm --detail /dev/md6
/dev/md6:
Version : 1.2
Creation Time : Fri Oct 21 21:52:52 2016
Raid Level : raid5
Array Size : 6837200384 (6520.46 GiB 7001.29 GB)
Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
Raid Devices : 8
Total Devices : 7
Persistence : Superblock is persistent
Update Time : Tue Oct 25 03:00:20 2016
State : clean, degraded
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : DiskStation:6 (local to host DiskStation)
UUID : 3e3dc3b7:44855efa:ef8d1eeb:e2ac8a03
Events : 253
Number Major Minor RaidDevice State
0 8 73 0 active sync /dev/sde9
1 8 89 1 active sync /dev/sdf9
2 8 105 2 active sync /dev/sdg9
3 0 0 3 removed
4 8 137 4 active sync /dev/sdi9
5 8 153 5 active sync /dev/sdj9
6 8 169 6 active sync /dev/sdk9
7 8 185 7 active sync /dev/sdl9
AND PERHAPS THERE IS A PROBLEM HERE TOO, BUT I'M NOT SURE!
mdadm --detail /dev/md7
/dev/md7:
Version : 1.2
Creation Time : Mon Oct 24 16:05:50 2016
Raid Level : raid5
Array Size : 5860457472 (5588.97 GiB 6001.11 GB)
Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
Raid Devices : 7
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Tue Oct 25 03:00:20 2016
State : clean, FAILED
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : DiskStation:4 (local to host DiskStation)
UUID : d8dc80f4:4f116ae7:535e5dff:79aea56d
Events : 30
Number Major Minor RaidDevice State
0 0 0 0 removed
1 0 0 1 removed
2 0 0 2 removed
3 0 0 3 removed
4 0 0 4 removed
5 0 0 5 removed
6 8 71 6 active sync /dev/sde7
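If md7 really is just a leftover of my first wrong mdadm -C run (its UUID d8dc80f4:... matches that attempt, and it only holds /dev/sde7), then I think it could be stopped so that sde7 is free again before md4 is repaired properly, but I would love confirmation before touching anything:
mdadm --stop /dev/md7                                           # stops the stray array and frees /dev/sde7
mdadm -E /dev/sde7 | grep -E 'Array UUID|Device Role|Events'    # check which superblock sde7 carries now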
Please help me restore the correct values. Synology's service center does not respond and I'm desperate.