
I'm sketching out the idea of building a NAS at home using a USB RAID enclosure (which may eventually be upgraded to a proper NAS enclosure).

I haven't got the enclosure yet, but that's not a big deal. Right now I'm deciding whether to buy HDDs for the storage (I currently have none) to set up RAID, but I cannot find good deals on them.

I found on Reddit that people were buying high-capacity drives for as low as $15/TB, e.g. paying around $100 for 10-12TB drives, but nowadays it's just impossible to find drives at a bargain price, thanks to AI datacenters, I guess.

In Europe I've heard of datablocks.dev, where you can buy white-label or recertified Seagate disks, and sometimes you can find refurbished drives on eBay, but I can't find the bargain deals everyone seemed to be getting up until last year.

For example, is 134 EUR for a 6TB refurbished Toshiba HDD a good price, considering the price hikes? What price per TB should I be looking for to consider a drive cheap? Where else can I search for cheap drives?
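
To make offers comparable I've been reducing everything to price per TB; a trivial sketch of that arithmetic, using only the example prices from this post:

```python
# Price-per-TB comparison, using the example prices from this post.
offers = [
    ("Refurb 6TB Toshiba", 134, 6),         # EUR, the deal I'm eyeing
    ("Old reddit bargain, 10TB", 100, 10),  # USD, last year's prices
    ("Old reddit bargain, 12TB", 100, 12),  # USD
]

for name, price, capacity_tb in offers:
    print(f"{name}: {price / capacity_tb:.2f} per TB")

# Refurb 6TB Toshiba: 22.33 per TB
# Old reddit bargain, 10TB: 10.00 per TB
# Old reddit bargain, 12TB: 8.33 per TB
```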

[-] kugmo@sh.itjust.works 75 points 1 week ago

but nowadays it's just impossible to find drives at a bargain price, thanks to AI datacenters, I guess.

You answered your own question.

[-] WhyJiffie@sh.itjust.works 0 points 1 week ago

That's dumb. There are still better and worse prices.

[-] NewNewAugustEast@lemmy.zip 2 points 1 week ago
[-] WhyJiffie@sh.itjust.works 1 points 6 days ago

That mathematically does not hold.

If one drive costs 1000 and another costs 2000, they cannot both be worse than each other!

[-] Quacksalber@sh.itjust.works 23 points 1 week ago

I'd actually caution against buying suspiciously cheap drives. There has been an epidemic of scammers selling (heavily) used drives as new.

https://www.heise.de/en/news/Fraud-with-Seagate-hard-disks-Dealers-swap-Seagate-investigates-10274864.html

[-] bridgeenjoyer@sh.itjust.works 21 points 1 week ago

My hard drives have more than doubled in cost in 6 months.

Fuck data centers.

[-] CmdrShepard49@sh.itjust.works 3 points 1 week ago

I checked my receipts and the used 14TB WD Ultrastars I've been buying from ServerPartDeals are about $100 more per drive than when I bought them last year. My number was around $12/TB for those and all the shucked WD Elements drives I'd been buying in the years prior.

[-] tiramichu@sh.itjust.works 15 points 1 week ago* (last edited 1 week ago)

There's nowhere convenient. As you correctly identified, AI has pushed the price of drives through the roof.

Your only real chance is to find a one-off listing on auction sites from someone who hasn't noticed what's going on or what the market is currently asking for drives.

You might still be able to find bargains in charity shops or on marketplace sites etc., but those are unlikely to be of sufficient capacity for a NAS build unless you get super lucky.

[-] Humanius@lemmy.world 8 points 1 week ago* (last edited 1 week ago)

Prices of HDDs have increased in recent months due to the AI bubble.

Here in the NL we have a website called Tweakers for comparing hardware prices. They only really list webstores that sell to the Netherlands, but it could help give you a decent indication of normal prices at the moment.

If I sort by price / TB, this refurbished 6TB Seagate SAS-drive for €122 seems to be one of the best deals I can find:
https://www.redshell.nl/seagate-enterprise-capacity-35-hdd-interne-harde-schijf-6-tb-7200-rpm-128-mb-35-sas/

Given that price, €134 for a refurbished 6TB Toshiba seems like a pretty decent deal. Though I would add that in my experience Toshibas are quite loud compared to Seagate and Western Digital, so if noise is a concern it might be worth looking at those brands instead.

[-] SpikesOtherDog@ani.social 7 points 1 week ago

I have been toying with the idea of using USB storage, but my concern is that the controllers are not meant to be used that heavily. Supposedly SATA controllers are also not built for the abuse I have been throwing at them in my machines, and I don't want to push it.

[-] WhyJiffie@sh.itjust.works 1 points 1 week ago

Supposedly SATA controllers are also not built for the abuse I have been throwing at them in my machines, and I don't want to push it.

what makes you say that?

[-] SpikesOtherDog@ani.social 3 points 1 week ago* (last edited 1 week ago)

I just read that recently. Let me see if I can track that source back down.

Edit: All-in-One CompTIA Server+ Certification Exam Guide, Second Edition (Exam SK0-005), McGraw Hill, Daniel Lachance, 2021, page 138. The table there says that SATA is not designed for constant use.

Edit 2:

https://www.hp.com/us-en/shop/tech-takes/sas-vs-sata

Reliability:

SAS: Designed for 24/7 operation with higher mean time between failures (MTBF), often 1.6 million hours or more
SATA: Suitable for regular use but not as robust as SAS for constant, heavy workloads, with MTBF typically around 1.2 million hours

They are saying that SAS is a better option with a longer MTBF, but I don't expect my drives to last 5 years, much less 136.
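
For anyone checking that 136 figure, it's just the quoted MTBFs converted from hours to years; a quick sketch:

```python
# Convert the quoted MTBF figures from hours to years.
HOURS_PER_YEAR = 24 * 365  # 8760

for name, mtbf_hours in [("SAS", 1_600_000), ("SATA", 1_200_000)]:
    print(f"{name}: ~{mtbf_hours / HOURS_PER_YEAR:.0f} MTBF-years")

# SAS: ~183 MTBF-years
# SATA: ~137 MTBF-years
# Note: MTBF is a fleet statistic, not a per-drive life expectancy.
```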

My own two cents here is that you probably don't want to run a SATA ZFS JBOD in an enterprise environment, but that's based more on enterprise lifecycle management than on utility.

[-] WhyJiffie@sh.itjust.works 1 points 6 days ago

Thanks! As you say, because of the 5 vs 136 years it does not really matter in our environment, but it probably starts to matter when you have lots of disks.

I don't actually know if this is the right way to calculate it, but if you count the runtime of each disk separately and add it all together, then 4 drives over 5 years is 20 out of the 136 MTBF years. But with 30 drives that becomes 150, which would indicate that you will likely see at least one failure of some kind because of using SATA.
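
Written out as a sketch (with my caveat above that I'm not sure this is the right model; it treats MTBF as a constant failure rate):

```python
def expected_failures(n_drives: int, years_each: float, mtbf_hours: float = 1_200_000) -> float:
    """Expected failures when MTBF is treated as a constant failure rate."""
    combined_hours = n_drives * years_each * 8760  # total drive-hours in the fleet
    return combined_hours / mtbf_hours

# 4 drives x 5 years = 20 drive-years against a ~137-year MTBF:
print(f"{expected_failures(4, 5):.2f}")   # ~0.15, a failure is unlikely
# 30 drives x 5 years = 150 drive-years:
print(f"{expected_failures(30, 5):.2f}")  # ~1.1, expect about one failure
```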

[-] SpikesOtherDog@ani.social 1 points 6 days ago

Hey, I'm not sure where you got the factor of 5 years, but if it came from me, it was a number I pulled out of my ass. At a repair depot I typically didn't see drives that lived much longer than 17k hours (just under 2 years). That didn't mean that they always fail at that age, only that the systems that came through had at most about that much time on them.

Regarding the 136 vs 150 numbers, those numbers are pure bullshit. MTBF is a raw calculation of how long it will take these devices to fail, based on operational runtime divided by how many failures were experienced in the field. They most likely applied a small number of warranty failures over a massive number of manufacturing runs and projected that it would take that long for about half their drives to fail.

In reality, you will see failure spikes over the lifetime of a product. The initial failures will spike and drop off. I recall reading either the data surrounding this article or something similar when they realized that the bathtub curve may not be the full picture. They just updated it again with numbers from up to last year, and you can see that it would be difficult to project an average lifetime of 20 years, much less 150.

My last thought on this is that when Backblaze mentions consumer vs enterprise drives they are possibly discussing SATA vs SAS. This comes from the realization that enterprise workstation drives are still just consumer drives with a part number label on them (seen in Dell and HP Enterprise equipment). Now, they could be referring to more expensive SATA drives, but I can't imagine that they are using anything but SAS at this point in their lifecycle.

[-] WhyJiffie@sh.itjust.works 1 points 5 days ago

At a repair depot I typically didn't see drives that lived much longer than 17k hours (just under 2 years).

I have a bunch of working drives with 2+ years on them, and in my area almost everyone still has their system installed on old hard drives.

that it would be difficult to project an average lifetime of 20 years

I did not mean an average timeline of 20 years

that when Backblaze mentions consumer vs enterprise drives they are possibly discussing SATA vs SAS.

there are plenty of enterprise SATA drives

This comes from the realization that enterprise workstation drives are still just consumer drives with a part number label on them (seen in Dell and HP Enterprise equipment).

Those are workstation drives. Obviously, if your work buys 2TB WD Blue drives they won't become enterprise drives. Enterprise drives include the likes of WD Red Pro, Ultrastars, etc., which do use the SATA interface.

[-] SpikesOtherDog@ani.social 1 points 4 days ago

I have a bunch of working drives with 2+ years on them, and in my area almost everyone still has their system installed on old hard drives.

Yeah. I was tempering that statement with the fact that I was getting computers for repair, often with bad drives, that had 2 years of use. Now that I really think about it, we were seeing them up to about 5 years. I recall that we were discussing whether to proactively replace the drives with that much time on there. At the time I wanted to ship them back out, and others were saying that 5 years was end of life. Our job was just to get them running again vs. performing full repairs.

I did not mean an average timeline of 20 years

Then I was not sure what you meant by this:

I don't actually know if this is the right way to calculate it, but if you count the runtime of each disk separately and add it all together, then 4 drives over 5 years is 20 out of the 136 MTBF years.

there are plenty of enterprise SATA drives


Those are workstation drives. Obviously, if your work buys 2TB WD Blue drives they won't become enterprise drives. Enterprise drives include the likes of WD Red Pro, Ultrastars, etc., which do use the SATA interface.

Those weren't really on my radar, TBH. I took a look at the Ultrastar spec sheet and have to concede that the interface itself doesn't seem to affect the lifecycle of the drive. I do have to say that the spec sheet says at the bottom: "MTBF and AFR specifications are based on a sample population and are estimated by statistical measurements and acceleration algorithms under typical operating conditions for this drive model," which is what I was guessing earlier about those million-hour numbers.

All in all, I am at this point only trying to track down and relay what I'm seeing about SAS vs SATA. From what I can tell, they are mostly the same, but SAS has more features (higher transfer rate, hot-swap capability, etc.). HP says that SAS is more reliable, but I don't see anything backing that up other than the features I just mentioned. Lenovo seems to agree with that take, saying that the reliability of SAS and SATA is comparable.

[-] WhyJiffie@sh.itjust.works 1 points 3 days ago

Then I was not sure what you meant by this:

I don't actually know if this is the right way to calculate it, but if you count the runtime of each disk separately and add it all together, then 4 drives over 5 years is 20 out of the 136 MTBF years.

5 years of drive runtime for one drive. 20 "years" for 4 drives, 40 "years" for 8 drives. I say "years" because the way I mean it is like this: running 4 drives for 10 minutes gives 40 minutes of combined drive runtime; running 4 drives for 5 years gives 20 years of combined drive runtime. I think a figure calculated like this can be compared to the MTBF. But again, I'm not totally confident that it really works this way.

All in all, I am at this point only trying to track down and relay what I'm seeing about SAS vs SATA.

I think it might be because the SATA drives you normally run across, especially in laptops, are not the enterprise kind but consumer drives built from cheaper components and simpler designs, and those are lower quality, while SAS drives are always enterprise grade.

But still, in my experience SATA drives can have a long life too, though it may be more unpredictable than with enterprise SATA/SAS drives.

HP says that SAS is more reliable

Could be controller chips and cable quality. But also, from what I've heard, the SFF-8644 type SAS connector can be used to attach a drive to multiple HBA cards, maybe even multiple machines, for redundancy.

[-] SpikesOtherDog@ani.social 1 points 3 days ago

OK, my 20 and your 20 are not the same.

I was saying the large numbers don't make sense if you don't have a large fleet of drives. Say you have ten servers, each with ten drives, and the MTBF is 10 million hours (yay, easy math!). Across those 100 drives, that works out to one expected failure in the fleet every 100k hours, or about 11 years of use.

Some of the sites I have been looking at say that this number stretches out significantly with a lighter duty cycle, because at 8 hours of daily use those 100k hours would give you about 33 years of use.
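
A minimal sketch of that fleet arithmetic, under the usual constant-failure-rate reading of MTBF:

```python
# Mean time between failures across a fleet of identical drives.
def fleet_mtbf_hours(drive_mtbf_hours: float, n_drives: int) -> float:
    return drive_mtbf_hours / n_drives

hours = fleet_mtbf_hours(10_000_000, 100)  # 100k hours between fleet failures
print(f"{hours / 8760:.1f}")               # ~11.4 years at 24/7 duty
print(f"{hours / (8 * 365):.1f}")          # ~34 years at 8h/day duty
```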

I think I like the annualized failure rate better, but I don't think either really paints a great picture.

https://www.seagate.com/support/kb/hard-disk-drive-reliability-and-mtbf-afr-174791en/

https://ssdcentral.net/hddfail/

I would rather the annualized rate were recalculated annually.
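
For what it's worth, the relationship between the two can be sketched like this (an assumption on my part: the usual constant-failure-rate model, with a given number of powered-on hours per year):

```python
import math

def afr_from_mtbf(mtbf_hours: float, powered_hours_per_year: float = 8760) -> float:
    """Annualized failure rate implied by an MTBF under a constant failure rate."""
    return 1 - math.exp(-powered_hours_per_year / mtbf_hours)

print(f"{afr_from_mtbf(1_200_000):.2%}")           # ~0.73%/year at 24/7 duty
print(f"{afr_from_mtbf(1_200_000, 8 * 365):.2%}")  # ~0.24%/year at 8h/day duty
```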

Regarding the controllers, that has been nagging at me this whole conversation. Most SATA peripheral cards do not have heat sinks, but most SAS cards do. The SAS cards at least have a more rugged appearance.

[-] Cort@lemmy.world 4 points 1 week ago

If you're getting used drives, I'd recommend running the array as RAID-Z2 or RAID 6 for the extra parity drive.

[-] mnemonicmonkeys@sh.itjust.works 0 points 1 week ago* (last edited 1 week ago)

Just keep in mind that the rebuild time for RAID 6 grows with drive size. A 6TB drive takes 1.4 days to rebuild, an 8TB drive takes 1.8 days, and a 10TB drive takes 2.3 days. So when a drive fails you might have a lot of downtime.

Here's the calculator I used in case anyone asks or has a more accurate option to recommend: https://cal67.calculator.city/raid-rebuild-time-calculator.html
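
Those figures imply a sustained rebuild rate of roughly 50 MB/s, so a back-of-the-envelope version of that calculator is simple (the 50 MB/s is my guess, reverse-engineered from the numbers above; best case, array otherwise idle):

```python
def rebuild_days(capacity_tb: float, rate_mb_per_s: float = 50) -> float:
    """Best-case rebuild: the whole drive rewritten at a sustained rate."""
    seconds = capacity_tb * 1e12 / (rate_mb_per_s * 1e6)
    return seconds / 86400

for tb in (6, 8, 10):
    print(f"{tb}TB: {rebuild_days(tb):.1f} days")  # ~1.4, ~1.9, ~2.3 days
```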

Also, apparently this is a best-case scenario. If you keep the server running during the rebuild, you could see rebuild times up to 10x this.

That being said, if you stagger your drives' ages (i.e. add or prematurely replace one drive per year), you can further minimize the risk of 2-3 drives going down at the same time, so a yearly rebuild in the background shouldn't be too bad.

[-] Cort@lemmy.world 1 points 1 week ago

RAID 6 takes longer to rebuild, but not twice as long; more like 45-50% longer. And RAID 5 can't tolerate another drive failure during the rebuild. With new drives I do use RAID 5 (Z1), but with used drives I'd want that extra assurance.
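
To put a rough number on that rebuild-window risk (a sketch only, assuming independent failures; the ~0.73% AFR comes from the MTBF figures up-thread, and the 5% used-drive AFR is purely an assumption):

```python
import math

def p_failure_during_rebuild(surviving_drives: int, rebuild_days: float, afr: float) -> float:
    """Chance that at least one more drive fails before the rebuild finishes."""
    daily_rate = -math.log(1 - afr) / 365  # constant-rate model
    return 1 - math.exp(-surviving_drives * daily_rate * rebuild_days)

# 6-drive array, one drive failed, 2.3-day rebuild:
print(f"{p_failure_during_rebuild(5, 2.3, 0.0073):.3%}")  # ~0.02% at ~0.73% AFR (newish drives)
print(f"{p_failure_during_rebuild(5, 2.3, 0.05):.3%}")    # ~0.16% if used drives run ~5% AFR
```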

[-] notagoblin@lemmy.world 4 points 1 week ago* (last edited 1 week ago)

Not really your scenario, but an HBA will allow you to use both SATA and SAS drives. That gives a bit more flexibility on price, especially with second-hand SAS drives.

My drives currently sit in a box, fed by wires hanging out of the PC (server!) case, but it would look much neater with a purpose-built 4-bay hot-swap cage.

I just wish I could get the full 12 Gb/s out of the couple of SAS drives I use :o(

[-] xinayder@infosec.pub 1 points 1 week ago

My idea was to run it off a mini PC where I self-host some services. I'm not sure there are HBAs in a small enough form factor to install in the mini PC (GMKtec G10).

[-] u9000@lemmy.blahaj.zone 4 points 1 week ago

I've had good luck with eBay. I got two 5TB enterprise drives, SMART-tested and from a trusted seller, for $100. I'm in the US so YMMV, but I'm pretty sure eBay is global.

Also, to allay your fears: eBay actually has really good buyer protections now, and a good reputation system for sellers.

[-] irmadlad@lemmy.world 4 points 1 week ago

I haven't bought HDDs in a while, but back in the day you could find deals on stuff like WD My Books and just shuck them.

[-] CmdrShepard49@sh.itjust.works 2 points 1 week ago

Also WD Elements and WD EasyStore. They're all the same drives inside.

[-] xinayder@infosec.pub 2 points 1 week ago

I see some WD Elements external HDDs and they seem to be cheaper than internal HDDs. I don't know if they're worth it, though. For example, a 5TB one is about $130.

[-] irmadlad@lemmy.world 2 points 1 week ago

Some other higher-end self-hosters may not approve of them, but I'll tell you that shucked WD externals are mostly what I run in my computers/servers.

[-] xinayder@infosec.pub 2 points 1 week ago

Do they have some sort of speed degradation forced by the firmware? I read on Reddit that some of them may have firmware that slows them down so they will never be as fast as an internal HDD. Did you notice something similar?

[-] irmadlad@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Do they have some sort of speed degradation forced by the firmware?

That's a technical level that is beyond my realm of experience. I have not noticed any real-world degradation in speed. However, My Book units are usually WD Green SMR consumer-grade HDDs or high-capacity drives that have lower sustained write performance than desktop CMR drives, so it may impact sustained or random small-write workloads, but I have no data to support that either way.

[-] Dirk@lemmy.ml 3 points 1 week ago

AI data centers need RAM. HDDs are used for "the cloud". 1 terabyte per user needs to be stored somewhere.

[-] irmadlad@lemmy.world 4 points 1 week ago

1 terabyte per user needs to be stored somewhere.

x 8.4+/- billion.

[-] Decronym@lemmy.decronym.xyz 2 points 1 week ago* (last edited 3 days ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

NAS: Network-Attached Storage
RAID: Redundant Array of Independent Disks for mass storage
SATA: Serial AT Attachment interface for mass storage
ZFS: Solaris/Linux filesystem focusing on data integrity


[-] Thorry@feddit.org 2 points 1 week ago

Yeah, those refurb drives from eBay were the last good source. I got a bunch of them last year; 2 of them had issues but were replaced under warranty by the manufacturer. All of those deals seem to be either gone or not priced very well.

Doing anything PC-related these days is very rough with prices being sky-high. And even if you are willing to pay, there isn't a lot of good stuff to get. It sucks ass.

[-] Hiro8811@lemmy.world 2 points 1 week ago

I bought 4x12TB refurbished Seagates for around 500€ and 2x8TB WD Red Pros for 300€. Note the four drives were bought a year ago through Amazon, and the WD Reds were bought in August 2025 from a site that has since shut down, but they still have WD warranty. Those people on Reddit either found crazy good deals or are talking out of their asses.

That being said, make sure the drives are CMR, and that the shop you buy from offers a warranty and will be around long enough to honour it in case of defects.
