Hi All,
Most important part first: I’ve lost access to my main **DATA** array after trying to create a new, unrelated array using (it appears) malfunctioning SSDs. I’m hoping the Webmin Linux RAID page might be able to save my **DATA** array. In a nutshell: can I use Webmin to get `sda`, `sdb`, `sdc` and `sdd` back and hooked into a new `/dev/mdX` mount point somehow?
Background:
My **DATA** array is composed of four 4TB spinners, currently showing up like this:
```
sda           3.6T linux_raid_member disk
└─md0         7.3T                   raid10
  └─md0p1     7.3T ext4              part
sdb           3.6T linux_raid_member disk
└─md0         7.3T                   raid10
  └─md0p1     7.3T ext4              part
sdc           3.6T linux_raid_member disk
└─md0         7.3T                   raid10
  └─md0p1     7.3T ext4              part
sdd           3.6T linux_raid_member disk
└─md0         7.3T                   raid10
  └─md0p1     7.3T ext4              part
```
The array was originally created with the following mdadm command:
```
$ sudo mdadm --create --verbose /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```
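For comparison against those creation parameters, my understanding is that the superblock on each member disk can be dumped read-only with `--examine` (a sketch; I haven’t run it yet, please correct me if this isn’t safe):

```
# Dump the md superblock from each member disk (read-only):
# should report the same level/layout/UUID the array was created with
sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd
```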
Yesterday, after a few hours spent trying to build a new “portable” TimeShift device out of two SSDs (three partitions, each with its own RAID1 array, meant to back up three of my LAN computers), I came to the realization that one or both of the SSDs were seriously corrupted/broken/kaput.
ROOT OF MY PROBLEM: At this point, I tried to “diagnose” these two SSDs by throwing a bunch of commands at them using Disks and GParted: changing filesystem types, adding/removing flags, creating new UUIDs, and, finally, wiping and formatting them. Nothing worked; they’re both in the garbage now.
Somehow, during all of these changes, my **DATA** array (i.e., the typical `/dev/md0` entry) stopped appearing the way it was supposed to: there was now some empty space, a large empty partition, and some more dead space. And this is where I stand right now.
Running `cat /proc/mdstat` gives me the following:
```
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid10 sdc[2] sdb[1] sdd[3] sda[0]
      7813771264 blocks super 1.2 512K chunks 2 far-copies [4/4] [UUUU]
      bitmap: 0/59 pages [0KB], 65536KB chunk

unused devices: <none>
```
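That output actually looks healthy to me, and since I understand `--detail` is a read-only query, I was planning to ask mdadm directly for the array’s name and UUID (which `mdstat` doesn’t print):

```
# Query the running array's metadata (read-only):
# prints the array UUID, name, state, and member devices
sudo mdadm --detail /dev/md0
```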
Some information I seem to recall seeing there in the past appears to be missing now. My `/etc/mdadm/mdadm.conf` entry:
```
ARRAY /dev/md/0 metadata=1.2 name=nas:0 UUID=dfb605b0-6029-4227-9f3a-622ca38f3606
```
…is no longer valid, as the `md0` entry now shows up as noted above, i.e., some blank space, a large partition, and some more blank space (it WAS a single contiguous partition before yesterday). And the large partition no longer has the UUID above, or the one in `fstab`:
```
UUID=dfb605b0-6029-4227-9f3a-622ca38f3606 /media/nas ext4 rw,suid,dev,exec,auto,user,group,async,nofail,discard 0 0
```
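To compare what the devices carry right now against that `fstab` line, I believe both of these are read-only checks (again, a sketch; happy to be corrected):

```
# Report the filesystem UUID/type on the array and its partition (read-only)
sudo blkid /dev/md0 /dev/md0p1

# Show the whole device tree with filesystem types and UUIDs (read-only)
lsblk -f /dev/sda /dev/sdb /dev/sdc /dev/sdd
```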
I THINK it is possible to fix my array by “re-attaching” the four 4TB spinners (`sda`, `sdb`, `sdc` and `sdd`) to a new/fixed `md0`, but I have no idea how, and Webmin’s Linux RAID page appears to offer me some hope. This is absolutely critical, as **DATA** is my ONLY copy of all my files and the last 3 years of my life: work, documents, images, music, i.e., everything.
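From what I’ve read, Webmin’s Linux RAID page drives `mdadm` underneath, so I gather the command-line equivalent of “re-attaching” would be something like the sketch below. I have NOT run any of this (the device names are just my current ones), and I’d really like confirmation before touching anything:

```
# Stop the array first -- only safe if nothing on it is mounted
sudo mdadm --stop /dev/md0

# Re-assemble from the existing superblocks on the four members;
# this reads the metadata already on disk and does NOT rewrite data
sudo mdadm --assemble /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```

Is that the right direction, or would Webmin do something safer?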
So I’m very open to options right now, please and thank you.
Merci:)
Shawn
| SYSTEM INFORMATION | |
| --- | --- |
| OS type and version | Ubuntu 22.04 |
| Virtualmin version | 7.7 |