Documentation improvement request #217
Hi @Eric678. The local path is not saved in the archive; it's specified with the --saveto option.
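For illustration, a hypothetical invocation (the volume name and restore path are placeholders, and this thread itself notes the docs show both --saveto and --save-to, so check wyng --help for your version):

```sh
# Hypothetical sketch: restore archived volume 'my-volume' to an
# explicit local path, since no local path is stored in the archive.
wyng receive my-volume --saveto=/dev/vgname/my-volume
```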
That does seem like a special case, but also worth a mention in the docs. FWIW, I've done some testing with Ext4, XFS and Btrfs, and the only time it came near to causing inode exhaustion was when the archive chunk size was set to 64KB and the data was especially compressible (hence very small files) and no deduplication was used (dedup has the effect of conserving inodes when there is substantial duplication). The two obvious ways out of this are doing mkfs with the default ratio of inodes or creating a new Wyng archive with a larger chunk size; a third route would be to enhance Wyng with an option that lets you set the maximum archive size.
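A minimal sketch of the mkfs route, assuming an ext4 backup drive (the device name is illustrative; -i and -N are standard mkfs.ext4 options, and omitting them gives the default ratio of roughly 1 inode per 16KB):

```sh
# WARNING: mkfs destroys existing data on the target device.
mkfs.ext4 -i 8192 /dev/sdX1        # bytes-per-inode: ~2x the default inode density
# or request an absolute inode count:
mkfs.ext4 -N 50000000 /dev/sdX1    # ~50M inodes
```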
No, it doesn't check inodes in advance. It could conceivably provide a warning when the limit is being approached, or warn any time an fs doesn't have at least 1 inode per 16KB... but most archive utilities (like tar) that can extract large numbers of small files don't do this.
It will fail atomically per volume, meaning you may have some volumes completed for a given session number while the session doesn't exist for other volumes, including the volume being backed up when the error occurred. There will also be (in that one archive volume) some unassociated data, which will be deleted the next time Wyng runs.
@tasket thank you for the prompt response.
@Eric678 Oh, I see. When using […], you don't have to use […].
FWIW, this is the first time I recall someone complaining about inode exhaustion, even though the sparsebundle-like format has been used in Wyng since 2018. Apple employed this sparsebundle volume strategy in Time Machine to accelerate backups, and that's where I got the idea. One workaround is to create a disk image on the backup drive (i.e. a file holding its own filesystem, loop-mounted) and keep the archive there.
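A sketch of that workaround, assuming ext4 on the backup drive (paths and sizes are illustrative):

```sh
# Create a sparse image file on the backup drive, give it its own
# filesystem with a generous inode ratio, then loop-mount it and
# point Wyng's archive location at the mount.
truncate -s 500G /mnt/backup/wyng-archive.img
mkfs.ext4 -F -i 8192 /mnt/backup/wyng-archive.img
mkdir -p /mnt/wyng-archive
mount -o loop /mnt/backup/wyng-archive.img /mnt/wyng-archive
```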
@tasket thanks, so if you want to actually restore a backup of a block device you have to use --saveto. That should be mentioned in the section on receive. I notice the --saveto option also appears as --save-to in the doco; which is correct, or are both?
@tasket should I post up any more of what I consider bugs? No comments or questions so far. I have several more, and have managed to brick my main archive doing things that I would consider perfectly normal. Odd, since you have had this up for 7 years? It is still beta, though, and extremely fragile; there must be something different about my environment. While I have had recent problems with R4.2.3, I now have a patched-together reliable system that shows no unexpected software faults except in wyng-backup, so don't dismiss anything I report as a hardware fault.
The archive chunk size has no relationship with TLVM chunk sizes, so there are no compatibility concerns to worry about. Wyng uses an internal bitmap for each volume to flag which chunks may have changed; it steps through the bitmap at different rates depending on the LVM or fs block size. There is no 'zero chunk' to link to; zero status is just a marker within the archive metadata. Incidentally, the more hardlinks are used in deduplication, the fewer inodes are used by the archive.

RAID configs should have no effect on compatibility, but of course filesystem configs can. Using fs options that change the write order or priority of journals and data can negatively affect any data format; these options were meant for high-redundancy, high-availability systems, and the added throughput you get may not be worth the cost in data corruption. In my experience, both Ext4 and XFS can lose or destroy data this way (more so with XFS).
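A quick generic demonstration of the hardlink point (plain shell, not Wyng-specific): duplicate chunks stored as hardlinks share a single inode.

```sh
echo 'chunk data' > chunk.A
ln chunk.A chunk.B        # dedup-style hardlink; no new inode allocated
ls -li chunk.A chunk.B    # both names list the same inode number
```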
New user here, great work, sorely needed for Qubes. A couple of issues:
There is no mention of receiving "other" volumes. Question: is the path bundled into the archive, or does every volume receive require a separate wyng command with --saveto? (Guessing yes; that should be documented.) How should the json be formatted for this scenario?
The other issue: I feel there should be a paragraph right up front describing how the wyng archive storage format uses a huge number of inodes. In my case backup drives are mkfs'd with a relatively small number of inodes, which has never been an issue before. A single 10T offline backup drive's 1.2M inodes were wiped out by a 70GB wyng archive (which is not big) using the default chunk size of 128K. This brings up another question: does wyng send check that there are enough inodes when it checks space in advance? If it runs out, does it fail atomically and roll back the whole session, or just the last partial volume?
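A quick way to monitor inode headroom on a backup filesystem (standard GNU tools; the mount point and archive path are illustrative):

```sh
df -i /mnt/backup   # inode totals, used and free on the backup fs
# Count unique inodes consumed by an archive (hardlinks counted once):
find /mnt/backup/wyng.backup -printf '%i\n' | sort -u | wc -l
```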
Thanks!