datahoarder


Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread


cross-posted from: https://beehaw.org/post/15404535

Data: https://archive.org/details/gamefaqs_txt

Mirror upload for faster download, 1 Mbit (expires in 30 days): https://ufile.io/f/r0tmt

GameFAQs at https://gamefaqs.gamespot.com hosts user-created FAQs and documents. Unfortunately they are baked into the HTML page and cannot be downloaded on their own. I have scraped a lot of pages and extracted those documents as regular TXT files. Because of the sheer amount of data, I focused on only a few systems.
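
For anyone curious about the mechanics, the extraction boils down to pulling the guide text out of each page's markup. Here is a hypothetical sketch in Python; the container element is an assumption, not my exact script:

import requests
from bs4 import BeautifulSoup

def extract_faq_text(url: str) -> str:
    # Fetch the guide page; GameFAQs bakes the plain-text FAQ into the HTML.
    html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text
    soup = BeautifulSoup(html, "html.parser")
    # Assumed container: plain-text guides have historically sat in a <pre>
    # block; adjust the selector to whatever the current markup uses.
    node = soup.find("pre") or soup.find(id="faqtext")
    if node is None:
        raise ValueError(f"no FAQ body found at {url}")
    return node.get_text()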

In 2020, a Reddit user named "prograc" archived FAQs for all systems at https://archive.org/details/Gamespot_Gamefaqs_TXTs , so most of it is already preserved. I take a different approach to organizing the files and folders. Here are a few notes about my attempt:

  • only 17 selected systems are included, so it's incomplete
  • system folders use the long name instead of the short one, e.g. "Playstation" instead of "ps"
  • similarly, game titles use their full name with spaces, and a leading "The" is moved to the end of the name for sorting reasons, such as "King of Fighters 98, The"
  • in addition to the document id, the filename also contains the category (such as "Guide and Walkthrough"), the system's short name (such as "(GB)"), and the author's name, e.g. "Guide and Walkthrough (SNES) by BSebby_6792.txt"
  • the FAQ documents contain an additional header taken from the HTML page, including the version number, the last update, the filename as explained above, and a web address pointing to the original publication
  • HTML documents are also included, with a very poor and simple conversion, but only the first page, so multi-page HTML FAQs are still incomplete
  • no zip archives or images are included; note that the 2020 archive from "prograc" contains falsely renamed .txt files that are in reality .zip and other files mistakenly included, such as nes/519689-metroid/faqs/519689-metroid-faqs-3058.txt -- my archive correctly excludes those
  • I included the same collection in an alternative arrangement where games are listed without system folders; this has the side effect of removing duplicates (by system: 67,277 files vs. by title: 55,694 files), because the same document is linked under many systems and was therefore downloaded multiple times

Hey guys, so it seems that Linkwarden isn't as good as I was hoping, since some websites throw up a cookie popup or some other overlay that basically prevents the capture.

Firefox Screenshot seems to work well, but it saves a PNG, which isn't text-searchable.

FF's "save page as..." feature seems to break things when viewing them back.

Save to PDF is another option, and that seems to be decent.

I'm not looking to copy entire websites, but I like to save web pages for later reference (i.e. instructions/specs).

I use Synology Note Station, but they don't have a web clipper for Firefox...

I'm fine with using a folder structure to store files, despite not being totally ideal when compared to Linkwarden.

Does anyone have any other suggestions that perhaps I've missed? Nothing too complicated... ideally, as simple as a button click would be great.


YouTube is cracking down on ad blockers, and they may stop working entirely within a year or so.

I don't watch YouTube that much, and most of the time I rewatch the same things. So I'm thinking of mirroring the videos I watch to other platforms, but I don't know which. I was considering ok.ru; I don't know if they respond to DMCA requests.
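
Whatever mirror I end up using, step one is keeping local originals. A minimal sketch with yt-dlp's Python API; the format string, archive file, and output template are just one sensible configuration, not the only one:

import sys
from yt_dlp import YoutubeDL

opts = {
    "format": "bestvideo*+bestaudio/best",    # best available quality
    "download_archive": "downloaded.txt",     # skip already-fetched videos
    "outtmpl": "%(uploader)s/%(title)s [%(id)s].%(ext)s",
}
with YoutubeDL(opts) as ydl:
    ydl.download(sys.argv[1:])                # pass video or playlist URLs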

Did anyone do something similar?


Running GParted gives me an error that says

fsyncing/closing /dev/sdb: Input/output error

In GNOME Disk Utility, under the assessment section, it says

Disk is OK, one bad sector

When I click to format it to ext4, I get a message that says

Error formatting volume

Error wiping device: Failed to probe the device '/dev/sdb' (udisks-error-quark, 0)

Running sudo smartctl -a /dev/sdb I get a few messages that all say

... SCSI error badly formed scsi parameters


On the physical side, I've swapped out the SATA data and power cables with the same results.


Any suggestions?

Amazon has a decent return policy so I'm not incredibly concerned, but if I can avoid that hassle it would be nice.


A few days ago I saw a post on the Reddit DataHoarder community asking how to back up keys and other small files for a long time.
It reminded me of a script I made some time ago to save my OTP secrets in case of loss of device or a reenactment of the Raivo OTP incident,
so I decided to make it public on GitHub. Hope someone here finds it useful.

github.com/Leviticoh/weedcup

The density is not great, about 1 kB per A4 page, but it can recover from losing up to half of the printed surface, and, if stored properly, paper should last a very long time.
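
weedcup has its own encoding, but as a rough illustration of where that damage tolerance comes from, here's a Reed-Solomon toy example using the third-party reedsolo package (the secret string is made up):

from reedsolo import RSCodec

secret = b"otpauth://totp/example?secret=ABCDEF"
rsc = RSCodec(len(secret))       # as many parity symbols as data symbols
block = rsc.encode(secret)       # data + parity, about twice the size

# Simulate a damaged stretch of the printout; the scanner knows which
# positions are unreadable, so they count as erasures.
damaged = bytearray(block)
for i in range(10):
    damaged[i] = 0
recovered = rsc.decode(damaged, erase_pos=list(range(10)))[0]  # reedsolo >= 1.5 returns a tuple
assert bytes(recovered) == secret

With known erasure positions, a code with n parity symbols can recover up to n lost symbols, which is how losing half the printed surface can still be survivable.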


Basically title!

I want to run it through my NAS to free up some space.

Thanks in advance.


I read something about once-reliable sites that would tell you the best [tech thing] no longer giving legit reviews and being paid to say good things about certain companies, and I don't remember where I read that or which sites, so I figured I'd bypass the issue and ask people here. I'm pretty new to anything near the level of complexity and technical detail that I see on datahoarder communities. I know about the 3-2-1 backup rule, and that's it. This is me trying to find something to hold copy 3 of my data.


I want to buy a few hard drives for backups.

What is the most reliable option for longevity? I was looking at the WD Ae, which they claim is fit for this purpose, but knowing nothing about hard drives, I wouldn't know if that's just a marketing claim.


cross-posted from: https://lemmy.world/post/17689141

I'll just save them in this folder so that I can totally come back later and read them.


I was considering building a 30+ TB NAS to simplify and streamline my current setup, but because it's a relatively low priority for me, I'm wondering: is it worth holding off for a year or two?

I'm unsure whether prices have more or less plateaued, in which case the difference wouldn't be all that substantial. Maybe I should just wait for Black Friday.

For context, two 16 TB HDDs currently seem to cost about $320, i.e. roughly $10 per TB.


Here's some related links:

  • This article by Our World in Data contains a chart showing how the price per GB has decreased over time.

  • This article by Tom's Hardware talks about how SSD prices bottomed out in July 2023 before climbing back up, and predicts further increases in 2024.


Are they worth considering or only worth it at certain price points?


cross-posted from: https://slrpnk.net/post/10273849

Vimm's Lair is getting removal notices from Nintendo and others. We need someone to help make a ROM pack archive -- can you help?

Vimm's Lair is starting to remove many ROMs at the request of Nintendo and others, so many original ROMs, hacks, and translations will soon be lost forever. Can any of you help make archive torrents of ROMs from Vimm's Lair and CDRomance? They have hacks and translations that don't exist elsewhere and will probably be removed soon, with iOS emulation and retro handhelds bringing so much attention to ROMs and these sites.


I've been working on this subtitle archive project for some time. It is a Postgres database, along with CLI and API applications that let you easily extract the subs you want. It is primarily intended for encoders and people with large libraries, but anyone can use it!

PGSub is composed of three dumps:

  • opensubtitles.org.Actually.Open.Edition.2022.07.25
  • Subscene V2 (prior to shutdown)
  • Gnome's Hut of Subs (as of 2024-04)

As such, it is a good resource for films and series up to around 2022.

Some stats (copied from README):

  • Out of 9,503,730 files originally obtained from dumps, 9,500,355 (99.96%) were inserted into the database.
  • Out of the 9,500,355 inserted, 8,389,369 (88.31%) are matched with a film or series.
  • There are 154,737 unique films or series represented, though note the lines get a bit hazy when considering TV movies, specials, and so forth. 133,780 are films, 20,957 are series.
  • 93 languages are represented, with a special '00' language indicating a .mks file with multiple languages present.
  • 55% of matched items have an FPS value present.

Once imported, the recommended way to access it is via the CLI application. The CLI and API can be compiled on Windows and Linux (and maybe Mac), and there are also pre-built binaries available.

The database dump is distributed via torrent (if it doesn't work for you, let me know), which you can find in the repo. It is ~243 GiB compressed, and uses a little under 300 GiB of table space once imported.

For a limited time I will devote some resources to bug-fixing the applications, or perhaps adding some small QoL improvements. But, of course, you can always fork them or make your own if they don't suit you.
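
If you'd rather query Postgres directly instead of going through the CLI, it's an ordinary database once imported. The table and column names below are made up for illustration; check the README for the real schema:

import psycopg2

conn = psycopg2.connect("dbname=pgsub")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT s.id, s.language, s.fps        -- hypothetical columns
        FROM subtitles s
        JOIN titles t ON t.id = s.title_id    -- hypothetical join
        WHERE t.name ILIKE %s AND s.language = %s
        """,
        ("%blade runner%", "en"),
    )
    for row in cur.fetchall():
        print(row)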


I'm looking at my library and I'm wondering if I should process some of it to reduce the size of some files.

There are some movies in 720p that are 1.6~1.9 GB each, and then there are some at the same resolution that are 2.5 GB.
I even have some in 1080p which are just 2 GB.
I only have two movies in 4K; one is 3.4 GB and the other is 36.2 GB (I can't really tell the difference in detail since I don't have a 4K display).

And then there's an anime I have twice at the same resolution: one set of files is around 669~671 MB each, the other set 191 MB each (here the quality difference is noticeable while playing them, as opposed to the other files, where I only compared extracted frames).

What would you do? What's your target size for movies and series? What bitrate do you go for in which codec?

Not sure if it's kind of blasphemy in here to talk about compromising quality for size, hehe, but I don't know where else to ask. I was planning on using these settings in ffmpeg -- what do you think?
I tried it on an anime at 1080p, going from 670 MB to 570 MB, and I wasn't able to tell the difference in quality when extracting a frame from the input and the output.
ffmpeg -y -threads 4 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda -i './01.mp4' -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 -c:a copy './01_out.mp4'
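
As for target sizes: with -cq the encoder decides the size, so fixed targets only make sense for bitrate-targeted encodes. The back-of-envelope relation is just size = bitrate × duration; a tiny helper, assuming ~128 kbps audio:

def video_kbps(target_mb: float, duration_min: float, audio_kbps: float = 128) -> float:
    # Average video bitrate (kbit/s) that hits a target file size,
    # ignoring container overhead.
    total_kbps = target_mb * 8 * 1024 / (duration_min * 60)
    return total_kbps - audio_kbps

print(video_kbps(2048, 90))   # a 90-minute film in 2 GB -> ~2979 kbps for video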


I was so confident that WhatsApp had been backing itself up to Google ever since I got my new Pixel, but it just wasn't. Then yesterday I factory reset my phone to fix something else and I lost it all. Years' worth of chats from so many times in my past just aren't there: all my texts with my mom and my family, group chats with old friends... I can't even look at the app anymore; I'll never use WhatsApp as much as I used to. I just don't feel right with this change. There's no way to get those chats back, and now it doesn't feel like there's any point backing up WhatsApp at all! I really wanna cry, this is so unfair!! And all I had to do was check WhatsApp before I did a factory reset... the TINIEST THING I could have done to prevent this, and I didn't fucking do it!!!!!!!

How do I get past this?


With Google Workspace cracking down on storage (I've been using them for unlimited storage for years now), I was lucky to get a limit of 300 TB, but now I have to actually watch what gets stored lol

A good portion is, uh, "Linux ISOs", but the rest is very seldom accessed files (in many cases last access was years ago) that I think would be perfect for tape archival -- things like byte-to-byte drive images and old backups. I estimate this portion would be about 100 TB or more.

But I've never done tape before, so I'm looking for some purchasing advice and such. From my research it seems I should target picking up an LTO-8 drive, since LTO-9 drives can also handle LTO-8 tapes for when those come down in price.

And then it spiraled from there, with discussions of library tape drives that are cheaper but need modifications, and all sorts of things.
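
From what I can tell, the classic no-extra-software route once a drive is in hand is streaming tar straight to the device. A minimal sketch, assuming the drive shows up as /dev/st0 and with placeholder paths (LTFS is the friendlier, filesystem-like alternative):

import tarfile

# "w|" streams an uncompressed tar to the device; bufsize should match
# the drive's block size if it is in fixed-block mode.
with tarfile.open("/dev/st0", mode="w|", bufsize=1024 * 1024) as tar:
    tar.add("/data/old-backups")   # recursively archives the directory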


Run this JavaScript code with the document open in the browser: https://codeberg.org/dullbananas/google-docs-revisions-downloader/src/branch/main/googleDocsRevisionDownloader.js

Usually this is possible by pasting it into the Console tab of the developer tools. If running JavaScript is not an option, then use this method: https://lemmy.ca/post/21276143

You might need to manually remove the characters before the first { in the downloaded file.

  1. Copy the document ID. For example, if the URL is https://docs.google.com/document/d/16Asz8elLzwppfEhuBWg6-Ckw-Xtf/edit, then the ID is 16Asz8elLzwppfEhuBWg6-Ckw-Xtf.
  2. Open this URL: https://docs.google.com/document/u/1/d/poop/revisions/load?id=poop&start=1&end=1 (replace poop with the ID from the previous step). You should see a json file.
  3. Add a 0 to the end of the number after end= (i.e. multiply it by 10) and refresh. Repeat until you see an error page instead of a json file.
  4. Find the highest number that makes a json file appear instead of an error page. This is a binary search: repeatedly try a number halfway between the highest number known to return a json file and the lowest number known to return an error page (see the sketch after this list).
  5. Download the json file. You might need to remove the characters before the first {.
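
Steps 3 and 4 are easy to automate. A sketch in Python; it assumes the document is accessible without login (otherwise you would need to pass your browser cookies to requests):

import requests

URL = "https://docs.google.com/document/u/1/d/{id}/revisions/load?id={id}&start=1&end={end}"

def is_json(doc_id: str, end: int) -> bool:
    # The endpoint returns HTTP 200 with a json body for valid end values.
    return requests.get(URL.format(id=doc_id, end=end)).status_code == 200

def max_revision(doc_id: str) -> int:
    lo, hi = 1, 10
    while is_json(doc_id, hi):    # step 3: append a 0 until it errors
        lo, hi = hi, hi * 10
    while lo + 1 < hi:            # step 4: binary search the boundary
        mid = (lo + hi) // 2
        if is_json(doc_id, mid):
            lo = mid
        else:
            hi = mid
    return lo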

I found the URL format for step 2 here:

https://features.jsomers.net/how-i-reverse-engineered-google-docs/

I am working on an easier way. Edit: here it is: https://lemmy.ca/post/21281709
