What you need is backup. RAID is not backup, and it's not for most home/personal users. I learnt that the hard way. Now my NAS uses simple volumes only; after all, there isn't much on it that I can't afford to lose. For anything really important, I keep multiple copies on different drives, plus some offline cold backup. So now if any of my NAS drives is about to fail, I can just copy the data off and replace the drive, instead of spending weeks trying to rebuild the RAID and ending up with a total loss when multiple drives fail in a row. The funny thing is that since moving to the simple-volumes approach, I haven't had a drive develop even a single bad sector.
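For the multiple-copies approach, one low-tech way to catch bit rot without filesystem-level checksumming is to keep a manifest of file hashes and periodically re-verify each copy against it. A minimal sketch in Python (the function names and layout are just illustrative, not any particular tool):

```python
import hashlib
import os


def sha256_of(path, bufsize=1 << 20):
    """Stream a file through SHA-256 so large files aren't loaded into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = sha256_of(full)
    return manifest


def verify(root, manifest):
    """Return paths whose current hash differs from the manifest, or which vanished."""
    bad = []
    for rel, digest in manifest.items():
        full = os.path.join(root, rel)
        if not os.path.exists(full) or sha256_of(full) != digest:
            bad.append(rel)
    return sorted(bad)
```

Save the manifest (e.g. as JSON) next to each copy when you write it; any file that later fails `verify` on one copy can be restored from another copy or the cold backup.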
Oh, I have backups myself. But parent is more or less talking about a 71 TiB NAS for residential usage and being able to ignore bit rot; in that context, such a person probably wouldn't have a backup.
Personally, I have long since moved from RAID 5/6 to RAID 1 or 10 with versioned backup; past a certain amount of data, RAID 5/6 just isn't cutting it anymore if anything goes even slightly wrong.
Yep, I get that. I've been there. My NAS is almost 10 years old now with just over 60 TiB of data on it, and there's nothing on it I can't really afford to lose. I don't have a reason to put a 20-bay NAS at home, so simple volumes turned out to be the better option. Repairing a RAID is no fun. I'd guess most ordinary home users like me should probably go with simple volumes: the cost and effort a RAID requires just don't justify the benefit.
It was ext4, and I've had it happen two different times; in fact, I've never seen it happen in a 'good', recoverable way.
It triggered a kernel panic on every machine I mounted it on, and it wasn't a media issue either: a block-level read of the media had zero issues and consistently returned the exact same data all 10 times I did it.
Notably, I had the same thing happen using btrfs due to power issues on a Raspberry Pi (partially corrupted writes resulting in a completely unrecoverable filesystem, despite it being in 2x redundancy mode).
Should it be impossible? Yes. Did it definitely, 100% for sure happen? You bet.
I never actually lost data on ZFS, and I've done some terrible things to pools that took quite a while to unbork, including running one under heavy write load on a machine with known RAM problems and no ECC.
So I can consider myself very lucky and unlucky at the same time.
I had data corruption on a ZFS filesystem that destroyed the whole pool beyond recovery (zfs was segfaulting while trying to import; all of the ZFS recovery features were crashing the zfs module and required a reboot).
The lucky part is that this happened just after (something like the next day) I had migrated the whole pool to another (bigger) server/pool, so that system was already scheduled for a full disk wipe.
So not much lost, but with a special kind of luck it might have been an encrypted archive ^^.