So I have a nearly full 4 TB hard drive in my server that I want to make an offline backup of. However, the only spare hard drives I have are a few 500 GB and 1 TB ones, so the entire contents won't fit on any single one, though I do have enough total space across them. I also only have one USB hard drive dock, so I can only plug in one drive at a time, and in any case I don't want to do any sort of RAID 0 or striping, because the drives are old and I don't want a single one failing to make the entire backup unrecoverable.

I could play digital Tetris and manually copy individual directories to each smaller drive until it fills up, mentally keeping track of which directories still need to be copied when I change drives, but I'm hoping for a more automatic and less error-prone way. Ideally, I'd want something that can start copying the entire contents of a given drive or directory to a drive that isn't big enough to hold everything, automatically stop at the last file that fits in its entirety (I don't want to split files between drives), and then wait for me to unplug the first drive, plug in another, and specify a new mount point before continuing to copy the remaining files, using as many drives as necessary to copy everything.
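
To illustrate, the behaviour I'm imagining is roughly this sketch (not a real tool, just a rough outline; the source, destination, and "done list" paths are placeholders):

```bash
#!/usr/bin/env bash
# Rough sketch: copy whole files from SRC to whatever drive is mounted
# at DEST, skip files that don't fit in the remaining space, and record
# finished files in DONE so a later run (after a drive swap) resumes
# where it left off. All three paths are placeholders.
set -u
SRC="/srv/data"
DEST="/mnt/backup"
DONE="$HOME/backup-done.list"

touch "$DONE"
find "$SRC" -type f -print0 | while IFS= read -r -d '' f; do
    grep -qxF "$f" "$DONE" && continue        # copied on an earlier pass
    size=$(stat -c %s "$f")
    avail=$(df --output=avail -B1 "$DEST" | tail -n 1)
    if (( size <= avail )); then
        # --relative recreates the source path under DEST
        rsync -a --relative "$f" "$DEST/" && echo "$f" >> "$DONE"
    else
        echo "no room left for: $f (leaving it for the next drive)" >&2
    fi
done
```

The idea being one pass per drive: whatever didn't fit gets picked up on a later pass with the next drive.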

Does anyone know of something that can accomplish all of this on a Linux system?

[–] [email protected] 3 points 10 months ago (1 children)

I just noticed https://lemmy.ml/u/[email protected] had proposed the same, but here's the same idea with more words ;).

I would propose you try to split your data manually into logically separate parts, so that you could fit, say, 0.8 TB on one drive, 0.4 TB on another, and maybe two 0.2 TB sets on a third. Then you'd have a script that uses a traditional backup approach with a modern backup app to back up the particular data set belonging to whichever disk is attached to the system. This approach painlessly gets you modern "infinite increments" backups, where older versions of the data are kept without having to manage full and incremental backups separately. You could then write another script to check that no important data has been left out and that no backups overlap (except for data you want to back up twice?).

For example, you could have a physical drive with a sticker "photos and music" on it to back up your ~/Photos and ~/Music.
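
As a rough sketch of what that per-drive script could look like (assuming restic as the backup app; the labels, mount point, and directory mapping are made-up examples):

```bash
#!/usr/bin/env bash
# Back up the data set belonging to the currently attached drive.
set -eu
label="$1"                          # e.g. ./backup.sh photos-and-music
repo="/mnt/backup/restic-$label"    # repository lives on the attached drive

case "$label" in
    photos-and-music) paths=("$HOME/Photos" "$HOME/Music") ;;
    documents)        paths=("$HOME/Documents") ;;
    *) echo "unknown drive label: $label" >&2; exit 1 ;;
esac

# One-time setup per drive: restic -r "$repo" init
restic -r "$repo" backup "${paths[@]}"
```

Any of the modern deduplicating backup tools (borg, restic, kopia, ...) would slot into the same structure.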

At some point some of those splits might become too large to fit their allocated drives, which would mean additional manual maintenance. Apply foresight to avoid these situations :).

If that kind of separation is not possible, then I guess tar with multi-volume splitting is one option, as suggested elsewhere.
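
For reference, a minimal sketch of that option with GNU tar's multi-volume mode (the 900G volume size, paths, and helper script are just examples; note that multi-volume archives can't be compressed):

```bash
# Helper that tar runs each time a volume fills: swap drives, continue.
cat > next-volume.sh <<'EOF'
#!/bin/sh
echo "Volume full. Mount the next drive at /mnt/backup, then press Enter." >&2
read line < /dev/tty
EOF
chmod +x next-volume.sh

# -M multi-volume, -L volume size, -F script to run between volumes
tar -cM -L 900G -F ./next-volume.sh -f /mnt/backup/backup.tar /srv/data

# Restoring later involves the same drive-swapping dance:
# tar -xM -F ./next-volume.sh -f /mnt/backup/backup.tar
```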

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago)

That is actually what I'm currently doing, and my file server is already organized this way, but I personally don't like it for offline backups because it still forces me to play digital Tetris and work out which directories will fit on which drive.

There's also the issue that some of my directories are outgrowing the drives. In particular, the one containing all the lossless files from my (hobby) photography work is getting close to 1 TB at this point. I do a ton of urban and industrial photography and honestly might have most of the interesting parts of my city documented by now, plus different versions of the same scene with different settings, which is how I ended up with so much data. Though I suppose I could split it into separate years instead of one huge directory.

I'm hoping for something that can automate this process so I don't have to consciously keep track of it as much (I don't trust my brain sometimes). I'm currently experimenting with some of the suggested solutions; maybe I'll find one that works better, and if not, I'll stick to the method you mentioned. Thank you for the suggestion though!