Volume NOT ACTIVE on the RAID of a TS-879 PRO

Discussions about the RAID0/RAID1/Q-RAID1/RAID5 features of the NAS units.
cmani
Posts: 15
Joined: 19 Jul 2015, 17:46

Volume NOT ACTIVE on the RAID of a TS-879 PRO

Post by cmani »

Hello,
I have a problem with a TS-879 PRO, firmware updated to 4.3.4.0486(20180215).
The NAS had a RAID array (the Storage & Snapshots manager reports RAID 5, but a twin unit in our office runs RAID 6, and honestly I would expect the two to have been configured identically).
The NAS was left disconnected for some time.
After reconnecting it, the web interface (Storage & Snapshots) showed the volume as NOT ACTIVE and reported disk no. 1 as missing, even though it was physically there, so we assumed it had failed.
We replaced it and rebooted the NAS.
The only change this produced is that disk no. 1 is now back to a green dot and is reported as present and OK.
The volume is still NOT ACTIVE. I tried running Manage/Recover (from Storage & Snapshots), but it replies:
[RAID5 Disk Volume Host Drive: 1 2 3 4 5 6 7 8] RAID Recovery failed.
Below is the output of some commands I was able to run over SSH, which I hope will help you understand the situation.

[~] # df -h
Filesystem Size Used Available Use% Mounted on
none 258.0M 238.7M 19.3M 93% /
devtmpfs 897.2M 8.0K 897.2M 0% /dev
tmpfs 64.0M 536.0K 63.5M 1% /tmp
tmpfs 907.8M 0 907.8M 0% /dev/shm
tmpfs 16.0M 0 16.0M 0% /share
tmpfs 16.0M 0 16.0M 0% /mnt/snapshot/export
/dev/md9 493.5M 119.6M 373.8M 24% /mnt/HDA_ROOT
cgroup_root 907.8M 0 907.8M 0% /sys/fs/cgroup
/dev/md13 355.0M 340.8M 14.2M 96% /mnt/ext
/dev/ram2 433.9M 2.3M 431.6M 1% /mnt/update
tmpfs 64.0M 2.3M 61.7M 4% /samba
tmpfs 16.0M 60.0K 15.9M 0% /samba/.samba/lock/msg.lock
tmpfs 16.0M 0 16.0M 0% /mnt/ext/opt/samba/private/msg.sock
tmpfs 1.0M 0 1.0M 0% /mnt/rf/nd
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md256 : active raid1 sda2[7](S) sdb2[6](S) sdd2[5](S) sdc2[4](S) sdf2[3](S) sde2[2](S) sdh2[1] sdg2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[0] sdb4[2] sdd4[3] sdc4[4] sdf4[5] sde4[6] sdg4[7] sdh4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sdg1[0] sda1[7] sdb1[6] sdd1[5] sdc1[4] sdf1[3] sde1[2] sdh1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 3/65 pages [12KB], 4KB chunk

unused devices: <none>
[~] # cat /etc/mdadm.conf
cat: /etc/mdadm.conf: No such file or directory
[~] # mount
none on /new_root type tmpfs (rw,mode=0755,size=264192k)
/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
tmpfs on /share type tmpfs (rw,size=16M)
tmpfs on /mnt/snapshot/export type tmpfs (rw,size=16M)
/dev/md9 on /mnt/HDA_ROOT type ext4 (rw,data=ordered,barrier=1,nodelalloc)
cgroup_root on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/cgroup/memory type cgroup (rw,memory)
/dev/md13 on /mnt/ext type ext4 (rw,data=ordered,barrier=1,nodelalloc)
/dev/ram2 on /mnt/update type ext2 (rw)
tmpfs on /samba type tmpfs (rw,size=64M)
tmpfs on /samba/.samba/lock/msg.lock type tmpfs (rw,size=16M)
tmpfs on /mnt/ext/opt/samba/private/msg.sock type tmpfs (rw,size=16M)
tmpfs on /mnt/rf/nd type tmpfs (rw,size=1m)
[~] # mdadm --query /dev/md9
/dev/md9: 517.71MiB raid1 8 devices, 0 spares. Use mdadm --detail for more detail.
[~] # mdadm --query --detail /dev/md9
/dev/md9:
Version : 0.90
Creation Time : Thu Nov 29 17:27:21 2012
Raid Level : raid1
Array Size : 530048 (517.71 MiB 542.77 MB)
Used Dev Size : 530048 (517.71 MiB 542.77 MB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 9
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Mar 4 16:25:27 2018
State : active
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0

UUID : 9a3cd0a2:3ba3d808:ac27ec3c:e48433de
Events : 0.486615

Number Major Minor RaidDevice State
0 8 97 0 active sync /dev/sdg1
1 8 113 1 active sync /dev/sdh1
2 8 65 2 active sync /dev/sde1
3 8 81 3 active sync /dev/sdf1
4 8 33 4 active sync /dev/sdc1
5 8 49 5 active sync /dev/sdd1
6 8 17 6 active sync /dev/sdb1
7 8 1 7 active sync /dev/sda1
[~] # mdadm --query --detail /dev/md256
/dev/md256:
Version : 1.0
Creation Time : Sat Mar 3 20:17:44 2018
Raid Level : raid1
Array Size : 530112 (517.77 MiB 542.83 MB)
Used Dev Size : 530112 (517.77 MiB 542.83 MB)
Raid Devices : 2
Total Devices : 8
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sat Mar 3 20:17:45 2018
State : clean
Active Devices : 2
Working Devices : 8
Failed Devices : 0
Spare Devices : 6

Name : 256
UUID : 9ab70d00:07012495:baebde97:01767ee4
Events : 2

Number Major Minor RaidDevice State
0 8 98 0 active sync /dev/sdg2
1 8 114 1 active sync /dev/sdh2

2 8 66 - spare /dev/sde2
3 8 82 - spare /dev/sdf2
4 8 34 - spare /dev/sdc2
5 8 50 - spare /dev/sdd2
6 8 18 - spare /dev/sdb2
7 8 2 - spare /dev/sda2
[~] # mdadm --query --detail /dev/md13
/dev/md13:
Version : 0.90
Creation Time : Thu Nov 29 17:27:29 2012
Raid Level : raid1
Array Size : 458880 (448.20 MiB 469.89 MB)
Used Dev Size : 458880 (448.20 MiB 469.89 MB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 13
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Mar 4 12:17:57 2018
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0

UUID : df70621f:90d5514f:2670920e:df98b785
Events : 0.7917

Number Major Minor RaidDevice State
0 8 4 0 active sync /dev/sda4
1 8 116 1 active sync /dev/sdh4
2 8 20 2 active sync /dev/sdb4
3 8 52 3 active sync /dev/sdd4
4 8 36 4 active sync /dev/sdc4
5 8 84 5 active sync /dev/sdf4
6 8 68 6 active sync /dev/sde4
7 8 100 7 active sync /dev/sdg4
[~] # mdadm -D --scan
ARRAY /dev/md9 metadata=0.90 UUID=9a3cd0a2:3ba3d808:ac27ec3c:e48433de
ARRAY /dev/md13 metadata=0.90 UUID=df70621f:90d5514f:2670920e:df98b785
ARRAY /dev/md256 metadata=1.0 spares=6 name=256 UUID=9ab70d00:07012495:baebde97:01767ee4
[~] # ./hdsentinel-017-x64
Hard Disk Sentinel for LINUX console 0.17x64.8556 (c) 2017 info@hdsentinel.com
Start with -r [reportfile] to save data to report, -h for help

Examining hard disk configuration ...

HDD Device 0: /dev/sda
HDD Model ID : ST3000DM001-9YN166
HDD Serial No: W1F15TW9
HDD Revision : CC4B
HDD Size : 2861588 MB
Interface : S-ATA Gen3, 6 Gbps
Temperature : 34 °C
Highest Temp.: 45 °C
Health : 100 %
Performance : 100 %
Power on time: 1162 days, 6 hours
Est. lifetime: more than 662 days
The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
No actions needed.

HDD Device 1: /dev/sdb
HDD Model ID : WDC WD30EFRX-68EUZN0
HDD Serial No: WD-WCC4N6YX95HL
HDD Revision : 82.00A82
HDD Size : 2861588 MB
Interface : S-ATA Gen3, 6 Gbps
Temperature : 32 °C
Highest Temp.: 41 °C
Health : 100 %
Performance : 100 %
Power on time: 223 days, 3 hours
Est. lifetime: more than 1000 days
The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
No actions needed.

HDD Device 2: /dev/sdc
HDD Model ID : ST3000DM001-1ER166
HDD Serial No: Z50031D3
HDD Revision : CC43
HDD Size : 2861588 MB
Interface : S-ATA Gen3, 6 Gbps
Temperature : 33 °C
Highest Temp.: 42 °C
Health : 100 %
Performance : 100 %
Power on time: 450 days, 2 hours
Est. lifetime: more than 1000 days
The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
No actions needed.

HDD Device 3: /dev/sdd
HDD Model ID : WDC WD30EFRX-68EUZN0
HDD Serial No: WD-WCC4N2LY7PLP
HDD Revision : 82.00A82
HDD Size : 2861588 MB
Interface : S-ATA Gen3, 6 Gbps
Temperature : 32 °C
Highest Temp.: 40 °C
Health : 100 %
Performance : 100 %
Power on time: 32 days, 10 hours
Est. lifetime: more than 1000 days
The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
No actions needed.

HDD Device 4: /dev/sde
HDD Model ID : ST3000DM001-1ER166
HDD Serial No: Z5004545
HDD Revision : CC43
HDD Size : 2861588 MB
Interface : S-ATA Gen3, 6 Gbps
Temperature : 33 °C
Highest Temp.: 42 °C
Health : 100 %
Performance : 100 %
Power on time: 481 days, 2 hours
Est. lifetime: more than 1000 days
The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
No actions needed.

HDD Device 5: /dev/sdf
HDD Model ID : ST3000DM001-1ER166
HDD Serial No: Z5006AWL
HDD Revision : CC43
HDD Size : 2861588 MB
Interface : S-ATA Gen3, 6 Gbps
Temperature : 33 °C
Highest Temp.: 42 °C
Health : 100 %
Performance : 100 %
Power on time: 469 days, 4 hours
Est. lifetime: more than 1000 days
The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
No actions needed.

HDD Device 6: /dev/sdg
HDD Model ID : WDC WD30EFRX-68N32N0
HDD Serial No: WD-WCC7K7HPY1A8
HDD Revision : 82.00A82
HDD Size : 2861588 MB
Interface : S-ATA Gen3, 6 Gbps
Temperature : 32 °C
Highest Temp.: 34 °C
Health : 100 %
Performance : 100 %
Power on time: 1 days, 19 hours
Est. lifetime: more than 1000 days
The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
No actions needed.

HDD Device 7: /dev/sdh
HDD Model ID : WDC WD30EFRX-68EUZN0
HDD Serial No: WD-WCC4N2SPFN7S
HDD Revision : 82.00A82
HDD Size : 2861588 MB
Interface : S-ATA Gen3, 6 Gbps
Temperature : 31 °C
Highest Temp.: 36 °C
Health : 100 %
Performance : 100 %
Power on time: 31 days, 17 hours
Est. lifetime: more than 1000 days
The hard disk status is PERFECT. Problematic or weak sectors were not found and there are no spin up or data transfer errors.
No actions needed.

HDD Device 8: /dev/sdi
HDD Model ID : USB DISK MODULE
HDD Serial No: PMAP1234
HDD Revision : PMAP
HDD Size : 491 MB
Interface : SCSI
Temperature : Unknown °C
Highest Temp.: Unknown °C
Health : Unknown %
Performance : Unknown %
Power on time:
Est. lifetime:


[~] #
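Reading the /proc/mdstat output above, only QNAP's internal system arrays (md9, md13, md256) are assembled; no data array appears at all, which matches the NOT ACTIVE state. As a minimal illustrative sketch (generic Python, not QNAP tooling), this is how one can extract the assembled arrays from mdstat-style text to confirm which ones came up:

```python
# Illustrative sketch: list the active md arrays found in /proc/mdstat text.
# The sample below is abridged from the output pasted above.
import re

def list_arrays(mdstat_text):
    """Return {array_name: raid_level} for every line reporting an active array."""
    arrays = {}
    for line in mdstat_text.splitlines():
        m = re.match(r"(md\d+)\s*:\s*active\s+(\S+)", line)
        if m:
            arrays[m.group(1)] = m.group(2)
    return arrays

sample = """\
md256 : active raid1 sdh2[1] sdg2[0]
md13 : active raid1 sda4[0] sdh4[1]
md9 : active raid1 sdg1[0] sdh1[1]
"""
print(list_arrays(sample))  # only the system arrays appear; no data array
```

On a real unit one would feed it `open("/proc/mdstat").read()` instead of the sample string.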

I remain at your disposal, of course, if you need any further information.

Unfortunately the data on the NAS was never backed up, and it is of the utmost importance to recover it, either by rebuilding the volume or by copying it off to an external USB disk.
Thank you in advance.
Claudio
FFFAB
Posts: 4475
Joined: 6 Jan 2016, 19:44

Re: Volume NOT ACTIVE on the RAID of a TS-879 PRO

Post by FFFAB »

Hi,
these are viable routes:
http://qnapsupport.net/how-to-fix-not-a ... -firmware/
However, if you are not sure, open a ticket with QNAP (you will almost certainly need to reinsert the original disk 1).
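Whichever route you take, one generic sanity check before any manual reassembly is to compare the per-member "Events" counters that `mdadm --examine` prints for each data partition: a member whose counter lags far behind the others is stale and should be left out at first. A hedged sketch of that comparison (the device names and event counts below are purely hypothetical, not taken from this NAS):

```python
# Illustrative sketch: flag stale RAID members by their mdadm "Events" counter.
# All values below are hypothetical examples, as if parsed from `mdadm --examine`.
def stale_members(events, tolerance=50):
    """Return the members whose event counter lags the newest by more than tolerance."""
    newest = max(events.values())
    return sorted(dev for dev, ev in events.items() if newest - ev > tolerance)

examined = {
    "/dev/sda3": 486615, "/dev/sdb3": 486615, "/dev/sdc3": 486615,
    "/dev/sdd3": 486615, "/dev/sde3": 486615, "/dev/sdf3": 486615,
    "/dev/sdg3": 12,     "/dev/sdh3": 486615,
}
print(stale_members(examined))  # -> ['/dev/sdg3'], e.g. a freshly swapped disk
```

If all members agree on the event count, a plain `mdadm --assemble` is far more likely to succeed; never use `mdadm --create` on existing members, as it overwrites their metadata.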

PS: Putting Seagate DM001 drives in a RAID NAS is (and was even back then) suicide: I hope it was not a professional who advised you, because if it was, you had better replace him (the professional, I mean, not the NAS) as soon as possible...
: Walkman : TS-453A 8Gb Kingston HyperX Impact CL9 (Seagate ST10000NM0086 + ST10000NE0008 : Thumbup : )
PiaNi
Posts: 4305
Joined: 25 Oct 2011, 22:39
Location: Bari

Re: Volume NOT ACTIVE on the RAID of a TS-879 PRO

Post by PiaNi »

NAS or no NAS, the DM001 drives, especially in the 3 TB size, are the least reliable disks of their era.
Just look at DM001 prices on eBay...
Gigabyte B550M Aorus Pro-P | Ryzen 5700x + Scythe Fuma 2 | nVidia RTX 3060 12GB | NVMe WDBlack 500GB + WD HC530 14TB
TS 473A: 3×WD60EFRX Raid5
TS 133: WD HC530 14TB
TS 453A: 3×WD84PURZ Raid5
TS 451+: WD40EFAX (SMR!!)
Synology 1520+: 4×WD4003FRYZ Raid5
cmani
Posts: 15
Joined: 19 Jul 2015, 17:46

Re: Volume NOT ACTIVE on the RAID of a TS-879 PRO

Post by cmani »

Thanks for the replies. The disks were already in place when the customer called me in; I will advise him to replace them as soon as possible.
PiaNi
Posts: 4305
Joined: 25 Oct 2011, 22:39
Location: Bari

Re: Volume NOT ACTIVE on the RAID of a TS-879 PRO

Post by PiaNi »

I warned you because I have been through it myself.
Gigabyte B550M Aorus Pro-P | Ryzen 5700x + Scythe Fuma 2 | nVidia RTX 3060 12GB | NVMe WDBlack 500GB + WD HC530 14TB
TS 473A: 3×WD60EFRX Raid5
TS 133: WD HC530 14TB
TS 453A: 3×WD84PURZ Raid5
TS 451+: WD40EFAX (SMR!!)
Synology 1520+: 4×WD4003FRYZ Raid5