this post was submitted on 23 Nov 2023
Data Hoarder
We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time (tm) ). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.
Anyone have any ideas for checking for this issue in existing backups?
Not confirmed but promising - https://github.com/openzfs/zfs/issues/15526#issuecomment-1810800004 and https://github.com/openzfs/zfs/issues/15526#issuecomment-1810819382
Thank you. Upon reading further, the state of block cloning seems to be the major variable in whether any corruption has occurred. However, there appears to remain a non-zero chance that such corruption could occur regardless of block cloning, and the issue may date back to 2.1.4/2.1.5, which were released in March and June of 2022 respectively.
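(For anyone on OpenZFS 2.2.x who wants to check whether block cloning has ever been active on a pool, a minimal sketch using the pool properties and module parameter introduced in 2.2, as I understand them from the release notes; "tank" is a placeholder pool name:)

```bash
# "active" (vs. merely "enabled") means cloned blocks exist or have existed.
zpool get feature@block_cloning tank

# Space used by and saved by cloned blocks (pool properties added in 2.2).
zpool get bcloneused,bclonesaved tank

# On Linux, 2.2.1 gates cloning behind a module parameter; 0 means disabled.
cat /sys/module/zfs/parameters/zfs_bclone_enabled
```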
The script in #15526 can somewhat check for a hole in the first 4K bytes of a file, but it gives false positives. If the script produces a syntax error on its last line, replace /bin/sh with /bin/bash (or wherever Bash lives on your system).
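For illustration, a minimal sketch of that style of check (not the exact script from the issue), assuming GNU cmp; like the original, it will false-positive on files that legitimately begin with zeros:

```bash
#!/bin/bash
# Heuristic sketch: flag files whose first 4 KiB are entirely zero bytes.
for f in "$@"; do
    # cmp -s -n 4096 silently compares the first 4096 bytes of the file
    # against /dev/zero; exit status 0 means they are all zero.
    if cmp -s -n 4096 "$f" /dev/zero; then
        echo "suspect (leading 4 KiB all zero): $f"
    fi
done
```

You could run it over a whole tree with something like `find /tank -type f -exec ./check-zeros.sh {} +` (the script name here is hypothetical).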
I used it on part of my collection and found several zeroed-out files, but I strongly suspect they were full of zeroes before they ever hit ZFS; at least some files from 2009 were already all zeroes. The script gave multiple false positives on .iso files (and one true positive on a fully zeroed file); I suspect those images simply lack a boot record.
Thank you. I've been keeping an eye on the thread to see if any consensus emerges on how the corruption manifests itself. It appears there is a possibility that a portion of a file could be zeroed out and then have new data written over it, giving the impression that all is well even though the file is still corrupt. The best method seems to be a list of checksums from known-good files, but that requires action taken in advance, which may or may not have happened (most people never anticipated this and so have no such list).
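For anyone starting from scratch, a minimal sketch of building such a list with standard tools (GNU coreutils assumed; the path is illustrative):

```bash
# Record SHA-256 checksums for every file under the archive.
find /tank/archive -type f -print0 | xargs -0 sha256sum > manifest.sha256

# Later: verify against the manifest; --quiet prints only failures.
sha256sum --check --quiet manifest.sha256
```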
I was able to copy a zipped 400GB dump from a torrent and checksum it before and after the move; no failures so far, at least at the beginning.
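A minimal version of that before-and-after check (file and destination names are placeholders):

```bash
# Hash the source, copy it, hash the copy, and compare the digests.
sha256sum dump.zip | awk '{print $1}' > before.sum
cp dump.zip /tank/backup/
sha256sum /tank/backup/dump.zip | awk '{print $1}' > after.sum
cmp -s before.sum after.sum && echo "checksums match" || echo "MISMATCH"
```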
It appears the issue arises more when a ZFS file system is used in a primary role, e.g., read from and written to directly as part of some active operation. Are you using it as a backup/archive, or as a primary partition where your OS and applications write to it directly? If it's the former, it seems you're much less likely to encounter the issue.