Realistically, is that a factor for a Microsoft-sized company, though? I'd be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between high-availability hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.
Fairly significant factor when building really large systems. If we do the math, there end up being some relationships between drive size, rebuild time, and the failure risk you're willing to accept.
Basically, for a given risk acceptance and total system size there is usually a sweet spot for disk sizes.
Say you want 16TB of usable space and you want to be able to lose 2 drives from your array (a fairly common requirement in small systems). These are some options:
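Assuming a RAID6-style two-parity layout, two drives' worth of capacity goes to parity, so usable space is (drives − 2) × drive size. A quick sketch of layouts that land on 16TB usable (the specific drive sizes here are just illustrative picks):

```python
# Sketch: two-parity (RAID6-style) layouts that give 16 TB usable.
# Assumes usable = (drives - 2) * drive_size; filesystem overhead ignored.

TARGET_USABLE_TB = 16
PARITY_DRIVES = 2

for drive_tb in (8, 4, 2, 1):                      # illustrative drive sizes
    data_drives = TARGET_USABLE_TB // drive_tb     # drives holding actual data
    total = data_drives + PARITY_DRIVES            # plus the two parity drives
    raw = total * drive_tb                         # raw capacity purchased
    print(f"{total:>2} x {drive_tb} TB -> {TARGET_USABLE_TB} TB usable "
          f"of {raw} TB raw ({TARGET_USABLE_TB / raw:.0%} efficient)")
```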
The more drives you have, the better recovery speed you get and the less usable space you lose to replication. You also get more usable performance with more drives. Additionally, smaller drives are usually cheaper per TB (down to a limit).
This means that 140TB drives become interesting if you are building large storage systems (probably at least a few PB) with low performance requirements (archives), but that's a space where tape robots already dominate.
The other interesting use case is huge systems: many petabytes, up into exabytes. More modern schemes for redundancy and caching mitigate some of the issues described above, but they are usually only relevant when building really large systems.
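The comment doesn't name those schemes, but erasure coding is the usual example at that scale. A rough comparison of the raw capacity needed for plain 3x replication versus a hypothetical 8+3 erasure-coded layout:

```python
# Illustration only: the comment above doesn't specify a scheme. This compares the
# raw capacity needed for 3x replication vs. a hypothetical 8+3 erasure-coded layout.

def raw_needed(usable_pb: float, data_shards: int, coding_shards: int) -> float:
    """Raw capacity (PB) required to store usable_pb with the given shard layout."""
    return usable_pb * (data_shards + coding_shards) / data_shards

USABLE_PB = 100  # example: 100 PB of user data

print(f"3x replication: {raw_needed(USABLE_PB, 1, 2):.0f} PB raw")   # 300 PB
print(f"8+3 erasure:    {raw_needed(USABLE_PB, 8, 3):.1f} PB raw")   # 137.5 PB
```

Both layouts survive multiple simultaneous failures (the 8+3 layout can lose any three shards), but the erasure-coded one needs less than half the raw capacity, at the cost of more network traffic and CPU during degraded reads and rebuilds.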
tl;dr: arrays of 6-8 drives at 4-12TB each are probably the sweet spot for most data hoarders.
I'd imagine they are using ceph or similar.
You have disk level protection for servers. Server level protection for racks. Rack level protection for locations. Location level protection for datacenters. Probably datacenter level protections for geographic regions.
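Not Ceph's actual CRUSH algorithm, but a toy sketch of the idea at one level of that hierarchy: place each replica in a different failure domain so that no single rack takes out every copy.

```python
# Toy failure-domain-aware placement (NOT Ceph's real CRUSH algorithm): spread
# replicas across racks so that losing one rack never loses every copy.
import random

# Hypothetical cluster layout: rack -> hosts
CLUSTER = {
    "rack-a": ["host-a1", "host-a2"],
    "rack-b": ["host-b1", "host-b2"],
    "rack-c": ["host-c1", "host-c2"],
}

def place_replicas(object_id: str, copies: int = 3) -> list[str]:
    """Pick one host from each of `copies` distinct racks, seeded by the object id."""
    rng = random.Random(object_id)                 # same object -> same placement
    racks = rng.sample(sorted(CLUSTER), copies)    # distinct racks = distinct failure domains
    return [rng.choice(CLUSTER[rack]) for rack in racks]

print(place_replicas("bucket/object-123"))         # one host in each of three racks
```

The same trick just repeats at every level of the hierarchy described above (host, rack, datacenter, region).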
It's fucking wild when you get to that scale.
True, but that's really going to be pushing your network links just to recover. Realistically, something like ZFS or RAID-6 with extra hot spares would help reduce the risks, but it's still a non-trivial amount of time. Not to mention the impact on normal usage during that period.
Network? Nah, the bottleneck is always going to be the drive itself. Storage networks might pass absurd numbers of Gbps, but ideally you'd be resilvering from a drive on the same backplane, and while SAS-4 tops out at 24 Gbps, there's no way you're going to hit that write speed on a single drive. The fastest retail drives don't do more than ~2 Gbps, and even the Seagate Mach.2 only does around twice that due to having two actuators.
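To put rough numbers on that, assuming a best-case rebuild that streams the whole drive at the sequential speeds mentioned above (the drive capacities below are just examples):

```python
# Back-of-the-envelope rebuild times, assuming the drive sustains its full sequential
# write speed for the whole resilver (best case; real rebuilds run slower than this).

def rebuild_hours(capacity_tb: float, write_gbps: float) -> float:
    """Hours needed to write capacity_tb terabytes at write_gbps gigabits per second."""
    bits = capacity_tb * 8e12                  # TB -> bits
    return bits / (write_gbps * 1e9) / 3600    # seconds -> hours

for capacity, speed, label in [
    (20, 2, "20 TB drive at ~2 Gbps"),
    (20, 4, "20 TB drive at ~4 Gbps (dual actuator)"),
    (140, 2, "hypothetical 140 TB drive at ~2 Gbps"),
]:
    print(f"{label}: ~{rebuild_hours(capacity, speed):.0f} hours")
```

Roughly a day per 20 TB at full speed, and closer to a week for a 140 TB drive, which is a big part of why such huge drives only make sense where rebuild time barely matters.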
100%. But the post I was responding to was talking about recovering a failed array from other copies, not locally.