Look for a RAID card that can run in IT mode.
Maybe this will help?
https://serverfault.com/questions/843748/how-to-upgrade-dell-perc-h310-to-dell-perc-6i-in-dell-r420
Is it necessary (or especially advantageous) to use a hardware RAID controller to create the RAID? I'm completely ignorant of those hardware aspects of servers, so I was hoping to create a software RAID using ZFS.
Initiator-target (IT) mode enables creating a JBOD with ZFS vdevs on it. You can have the ZFS vdevs in a raidz configuration, which gives you the same drive redundancy as a hardware RAID, with raidz1 performing similarly to RAID 5.
ZFS is commonly used with a JBOD configuration on a RAID controller, but you can also use any other kind of controller as long as the individual drives can be written to directly. Examples would be NVMe drives attached straight to the PCIe bus, or plain SATA controllers. This is more a performance optimization than a compatibility issue.
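For example, creating a raidz1 pool over whole disks is a single command. A minimal sketch, assuming a pool name of "tank" and placeholder device paths (substitute whatever your system actually shows):

sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

The resulting pool survives the loss of any one of the four drives, much like RAID 5.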
Okay, I think that's basically what I'm trying to do, though I don't know if I already have a JBOD. My drives certainly do show up on my desktop as just a bunch of individual drives, haha. How do I access the hardware controller to see how it is currently set up?
just a bunch of individual drives
That is literally what JBOD means, so congratulations, you already have one. A classical RAID would show up as a single drive.
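You can see the difference with lsblk: an HBA in IT mode lists one entry per physical disk, while a hardware RAID volume shows up as a single virtual disk. A quick check (all standard lsblk columns):

lsblk -d -o NAME,SIZE,TYPE,MODEL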
That's good to know. I've been experimenting with zpool with some success this afternoon. The one weird thing is that the disks in the SC220 have two device letters each, while the ones in the R420 have a single device letter in /dev/. Like so:
R420
Disk 1: sda
Disk 2: sdb
Disk 3: sdc
Disk 4: sdd

SC220
Disk 5: sde and sdo
Disk 6: sdf and sdp
...
Disk 14: sdn and sdx
Any idea why a single disk is assigned two device letters, and whether it matters once I create the pool?
Could you run something like sudo lsblk -o+MODEL
and note down the model for the drives? I kind of suspect that the HBA you are using is still doing some abstraction and is not in IT mode. The duplication could come from connecting two SAS cables to the same backplane, thus creating a sort of double image of the enclosure. This is usually handled and hidden by the HBA if it is configured correctly, though.
Please also check that you are in fact using the correct ports on the enclosure. If you are not building a SAN, only the "A" ports are supposed to be used and the "B" ports should be left unused.
Very interesting and thanks for helping me with this. I do have both SAS cables plugged in. I double-checked the back of the SC220 and I'm definitely only using the "A" ports. The lsblk command you suggested is interesting. Here is the output for the drives with two device letters.
***** first set of device letters *****
sde 8:64 0 931.5G 0 disk ST91000642SS
├─sde1 8:65 0 931.5G 0 part
└─sde9 8:73 0 8M 0 part
sdf 8:80 0 931.5G 0 disk ST91000642SS
├─sdf1 8:81 0 931.5G 0 part
└─sdf9 8:89 0 8M 0 part
sdg 8:96 0 838.4G 0 disk AL13SEB900
├─sdg1 8:97 0 838.4G 0 part
└─sdg9 8:105 0 8M 0 part
sdh 8:112 0 838.4G 0 disk AL13SEB900
├─sdh1 8:113 0 838.4G 0 part
└─sdh9 8:121 0 8M 0 part
sdi 8:128 0 931.5G 0 disk ST91000642SS
├─sdi1 8:129 0 931.5G 0 part
└─sdi9 8:137 0 8M 0 part
sdj 8:144 0 931.5G 0 disk ST91000642SS
├─sdj1 8:145 0 931.5G 0 part
└─sdj9 8:153 0 8M 0 part
sdk 8:160 0 931.5G 0 disk ST91000642SS
├─sdk1 8:161 0 931.5G 0 part
└─sdk9 8:169 0 8M 0 part
sdl 8:176 0 931.5G 0 disk ST91000642SS
├─sdl1 8:177 0 931.5G 0 part
└─sdl9 8:185 0 8M 0 part
sdm 8:192 0 838.4G 0 disk AL13SEB900
├─sdm1 8:193 0 838.4G 0 part
└─sdm9 8:201 0 8M 0 part
sdn 8:208 0 931.5G 0 disk ST91000642SS
├─sdn1 8:209 0 931.5G 0 part
└─sdn9 8:217 0 8M 0 part
***** second set of device letters *****
sdo 8:224 0 931.5G 0 disk ST91000642SS
├─sdo1 8:225 0 2G 0 part
└─sdo2 8:226 0 929.5G 0 part
sdp 8:240 0 931.5G 0 disk ST91000642SS
├─sdp1 8:241 0 2G 0 part
└─sdp2 8:242 0 929.5G 0 part
sdq 65:0 0 838.4G 0 disk AL13SEB900
├─sdq1 65:1 0 2G 0 part
└─sdq2 65:2 0 836.4G 0 part
sdr 65:16 0 838.4G 0 disk AL13SEB900
├─sdr1 65:17 0 2G 0 part
└─sdr2 65:18 0 836.4G 0 part
sds 65:32 0 931.5G 0 disk ST91000642SS
├─sds1 65:33 0 2G 0 part
└─sds2 65:34 0 929.5G 0 part
sdt 65:48 0 931.5G 0 disk ST91000642SS
├─sdt1 65:49 0 2G 0 part
└─sdt2 65:50 0 929.5G 0 part
sdu 65:64 0 931.5G 0 disk ST91000642SS
├─sdu1 65:65 0 2G 0 part
└─sdu2 65:66 0 929.5G 0 part
sdv 65:80 0 931.5G 0 disk ST91000642SS
├─sdv1 65:81 0 2G 0 part
└─sdv2 65:82 0 929.5G 0 part
sdw 65:96 0 838.4G 0 disk AL13SEB900
├─sdw1 65:97 0 2G 0 part
└─sdw2 65:98 0 836.4G 0 part
sdx 65:112 0 931.5G 0 disk ST91000642SS
├─sdx1 65:113 0 2G 0 part
└─sdx2 65:114 0 929.5G 0 part
The first set and the second set do show that they are assigned to the same device model, which makes sense since I can also see in the GNOME Disks app that each of these disks has two device letters (e.g. sde and sdo).

However, the interesting thing I noticed in the output above is that the first set of device letters shows the smaller partition as 8M in size, while the second set shows it as 2G. I recall that when I first looked at the disks, before I started using zpool to experiment with creating pools, all of the drives in the SC220 had a 2G partition labeled "swap" (in the GNOME Disks app). After I created a zpool using devices sde-sdn, the devices in the zpool have a partition that is 8M in size. Now only the second set of devices (sdo-sdx) still shows the 2G partition, which seems weird. Are there two partition tables?
Datasheet for one of the drive models: apparently these have a dual SAS interface, so what you are seeing could be completely normal. I don't have any experience with this type of setup, though.
By the way, you can uniquely identify partitions by using something like lsblk -o+PARTUUID,FSTYPE
The PARTUUID should never repeat in the output, even if the partition table was somehow used as a template (though "dd"ing from disk to disk will duplicate those, of course).
Also check out the "SERIAL" column for lsblk to uniquely identify the drives themselves.
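Something like this should show both at once (all standard lsblk columns; the exact output will vary with your setup):

sudo lsblk -o NAME,SIZE,MODEL,SERIAL,PARTUUID,FSTYPE

If two device letters report the same SERIAL, they are the same physical drive seen over two paths.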
Thanks, you are incredibly helpful. The four HDDs in the R420 are SCSI drives, while the ones in the SC220 are SAS drives, and it is indeed the 10 SAS drives in the SC220 that have two device letters each. Wow, how about that. If this is the explanation for why there are two devices per drive, I see on the SAS wiki that it has something to do with: "SAS devices feature dual ports, allowing for redundant backplanes or multipath I/O; this feature is usually referred to as the dual-domain SAS."
That gives me plenty to go on for further online research. I was just getting crap search results before, but now I have a better idea what to search for. Thanks a lot!!
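In case it helps anyone else who lands here: listing the WWN and serial per device should confirm that two device letters point at the same physical drive (WWN and SERIAL are standard lsblk columns; I haven't verified this on this exact hardware):

lsblk -d -o NAME,WWN,SERIAL,MODEL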
Nope - that's the whole point of ZFS: you don't need any special hardware, nor do you want that layer hiding the details, since ZFS manages the drives itself. Plus, you probably want to use RAIDZx with spare drives to absorb failures.
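Adding a hot spare is a one-liner once the pool exists. A sketch, with the pool name "tank" and the device path as placeholders:

sudo zpool add tank spare /dev/sdX

With the ZFS event daemon running, the spare can kick in automatically when a member drive fails.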
Yes, thanks. After some experimenting, I did find the raidz1 setting and plan to use it for sure!