this post was submitted on 05 Apr 2024
71 points (97.3% liked)
Asklemmy
you are viewing a single comment's thread
I'd say "old" RAID could be dead if you have proper backups and can replace a failed drive quickly when uptime is crucial. But there's also modern RAID like btrfs and ZFS, which can also repair corrupted files, caused by bit rot for example. Old RAID can't do that, and neither could hardware-based RAID when I last used it years ago. Maybe that has changed, but I don't see the point of hardware-based RAID in most cases anymore.
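For illustration, here is a minimal sketch of the idea behind that self-repair: checksumming filesystems like btrfs and ZFS store a checksum when data is written and re-verify it on read or during a scrub. This demo just fakes that with `sha256sum` on a temp file; it is not how btrfs/ZFS are implemented internally, only the detection principle.

```shell
# Sketch: detect silent corruption (bit rot) via stored checksums.
f=$(mktemp)
printf 'important data' > "$f"
sum_at_write=$(sha256sum "$f" | cut -d' ' -f1)   # checksum recorded at write time

# Simulate bit rot: silently flip one byte in the middle of the file.
printf 'X' | dd of="$f" bs=1 seek=3 conv=notrunc 2>/dev/null

sum_at_read=$(sha256sum "$f" | cut -d' ' -f1)    # checksum recomputed at read time
if [ "$sum_at_write" != "$sum_at_read" ]; then
    # On a btrfs/ZFS mirror, the fs would now rewrite this block
    # from the copy whose checksum still matches.
    echo "corruption detected"
fi
rm -f "$f"
```

On a real system you trigger the full-disk version of this check with `btrfs scrub start /mnt` or `zpool scrub <pool>`.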
AFAIK the only officially supported RAID modes in btrfs are RAID0 and RAID1.
RAID56 is officially considered unstable.
RAID56 is risky in more filesystems than just btrfs, though. But if you have a UPS as backup, you should be fine.
A UPS won't protect you from a kernel panic, sadly.
True
What about dm-raid? Is it still risky? I'd guess so, since it spans separate devices, so any software RAID 5/6 would be problematic?
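For context, classic Linux software RAID5 (md) is usually set up like this; the device names below are hypothetical examples, not from the thread. The crash risk being discussed is the RAID5 "write hole": a power loss or panic between writing data and parity can leave a stripe inconsistent, which md can mitigate with a write journal.

```shell
# Hypothetical 3-disk RAID5 array with mdadm (example device names).
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
cat /proc/mdstat   # check array state and resync progress

# Mitigating the write hole with a journal device (mdadm >= 3.4):
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    --write-journal /dev/nvme0n1p1 /dev/sda1 /dev/sdb1 /dev/sdc1
```

Note md still lacks per-file checksums, so unlike btrfs/ZFS it can't tell which copy is the corrupted one after bit rot.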