this post was submitted on 28 Nov 2023

Data Hoarder
So I'm just getting around to checking the logs on my backup server, and they say I have a permanently damaged file that's unrepairable.

How is this even possible on a raidz2 volume where every member shows zero problems and no dead drives? Isn't that the whole point of raidz2, so that if one (er, two) drives have a problem the data is still recoverable? How can I figure out why this happened and why it was unrecoverable, and most importantly, prevent it in the future?

It's only my backup server and the original file is still A-OK, but I'm really concerned here!
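For anyone hitting the same thing, a hedged sketch of the usual first triage steps (the pool name is from this post; the device path is a placeholder, not a real disk):

```shell
# Hypothetical triage for a permanent error on a healthy-looking pool.
zpool status -v data_pool3        # list the affected file(s) and error counters
zpool events -v data_pool3        # per-event detail, including checksum events
smartctl -a /dev/disk/by-id/wwn-0x5000ccaxxxxxxxx1   # per-drive SMART health
```

If the damaged path lives inside a snapshot (as it does below), the error report can persist until the snapshot holding the bad block is destroyed, even after the live file is restored.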

zpool status -v:

3-2-1-backup@BackupServer:~$ sudo zpool status -v
pool: data_pool3
state: ONLINE
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
 see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 06:59:59 with 1 errors on Sun Nov 12 07:24:00 2023
config:

    NAME                        STATE     READ WRITE CKSUM
    data_pool3                  ONLINE       0     0     0
      raidz2-0                  ONLINE       0     0     0
        wwn-0x5000ccaxxxxxxxx1  ONLINE       0     0     0
        wwn-0x5000ccaxxxxxxxx2  ONLINE       0     0     0
        wwn-0x5000ccaxxxxxxxx3  ONLINE       0     0     0
        wwn-0x5000ccaxxxxxxxx4  ONLINE       0     0     0
        wwn-0x5000ccaxxxxxxxx5  ONLINE       0     0     0
        wwn-0x5000ccaxxxxxxxx6  ONLINE       0     0     0
        wwn-0x5000ccaxxxxxxxx7  ONLINE       0     0     0
        wwn-0x5000ccaxxxxxxxx8  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

    data_pool3/(redacted)/(redacted)@backup_script:/Documentaries/(redacted)
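As an aside on reading this output: the last three columns are the per-vdev READ/WRITE/CKSUM counters, and, as this post itself shows, they can all be zero even though a scrub found an uncorrectable block (for example if the counters were cleared since the damage occurred). A tiny self-contained sketch that sums the CKSUM column from a pasted excerpt (sample data via a here-doc, not a live pool):

```shell
# Sum the CKSUM column ($5) across member-disk lines of a zpool status
# excerpt; uses inline sample data so it runs without any pool present.
awk '/wwn-/ { sum += $5 } END { print sum + 0 }' <<'EOF'
        wwn-0x5000ccaxxxxxxxx1  ONLINE       0     0     0
        wwn-0x5000ccaxxxxxxxx2  ONLINE       0     0     3
EOF
# prints 3
```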
[email protected] 1 points 9 months ago

It's definitely not the recent ZFS bug that others mentioned here, simply because corruption caused by that bug can't be identified by ZFS at all: the checksums still match, so the filesystem looks consistent and a scrub never reports a permanent error.

Perhaps this is relevant instead: https://discourse.practicalzfs.com/t/recurring-permanent-errors-in-healthy-zpool/919/5
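For anyone who still wants to rule that bug out on their own box, a hedged sketch (assumes OpenZFS 2.2.x on Linux; the sysfs path may differ on other platforms and older releases):

```shell
# Check the installed OpenZFS release; the reports at the time centered
# on the 2.2.x series.
zfs version

# Check whether block cloning is enabled; 0 means the feature is disabled
# at the module level.
cat /sys/module/zfs/parameters/zfs_bclone_enabled
```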