
Comparison to backy2? #27

Open
gardar opened this issue Feb 5, 2018 · 5 comments

@gardar

gardar commented Feb 5, 2018

How does barc compare to backy2? https://github.com/wamdam/backy2

backy2 is obviously not tied to proxmox as barc is, but how about other features?
backy2's documentation talks about scrubbing; is this something that's taken care of in barc?

@franklupo
Member

Hi,
what is scrubbing?

Best regards

@gardar
Author

gardar commented Feb 7, 2018

Scrubbing is an error-correction technique to ensure old backups are healthy: if something happens to the backed-up disk image (a failed disk, bad memory or something else), then the next time the backup job is run the scrub will detect and fix that error.

This is also explained in the readme for backy2

Every backed up block keeps a checksum with it. When backy2 scrubs the backup, it reads the block from the backup target storage, calculates its checksum and compares it to the stored checksum (and size). If the checksum differs, it's most likely that there was an error when storing or reading the block, or bit rot on the backup target storage.

Then the block, and the backups it belongs to, are marked 'invalid', and the block will be re-read for the next backup version even if rbd diff indicates that it hasn't changed.

Further info here: http://backy2.com/docs/scrub.html#why-scrubbing-is-needed
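
To make the idea concrete, here is a minimal sketch of what such a scrub boils down to, assuming a hypothetical list of block metadata records with stored checksums (this is only an illustration, not backy2's actual code):

```python
import hashlib
from pathlib import Path

def scrub(blocks, backup_dir):
    """Re-read every backed-up block and compare it with its stored checksum and size.

    `blocks` is a hypothetical list of dicts such as
    {"uid": "block-0001", "sha512": "...", "size": 4194304}
    kept in a metadata store alongside the backup.
    """
    invalid = []
    for block in blocks:
        data = Path(backup_dir, block["uid"]).read_bytes()
        if hashlib.sha512(data).hexdigest() != block["sha512"] or len(data) != block["size"]:
            # Mark the block (and every backup version referencing it) invalid,
            # so it is re-read from the source image on the next backup run.
            invalid.append(block["uid"])
    return invalid
```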

@franklupo
Member

Hi,
The backup process is based on ceph snapshots and local files.
Except for the first image, which is complete, each subsequent backup is a diff against the previous snapshot, and so on. If the local file or the ceph snapshot does not exist, the backup stops because something is wrong. This case is serious, because the references have been deleted.
With --keep the desired number of copies is maintained; when that number is reached, the oldest diff is merged into the first.
This mechanism should not lead to the kind of problem mentioned above.

The idea of barc is to immediately provide a ready-to-use image, assembled from the diffs, that can be used even outside ceph in case of problems.
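
To illustrate, the diff chain described above corresponds roughly to the following ceph/rbd commands (a sketch only, with a made-up image name and snapshot labels, not barc's actual code):

```python
import subprocess

IMAGE = "rbd/vm-100-disk-1"  # hypothetical pool/image name

def backup(snap, prev_snap, dest):
    """First run: full export. Later runs: export only the delta since prev_snap."""
    subprocess.run(["rbd", "snap", "create", f"{IMAGE}@{snap}"], check=True)
    if prev_snap is None:
        # initial, complete image
        subprocess.run(["rbd", "export", f"{IMAGE}@{snap}", dest], check=True)
    else:
        # incremental diff against the previous snapshot
        subprocess.run(["rbd", "export-diff", "--from-snap", prev_snap,
                        f"{IMAGE}@{snap}", dest], check=True)

# If prev_snap or the matching local file is missing, the chain is broken and
# the backup has to stop, which is the failure case described above.
```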

Best regards

@gardar
Author

gardar commented Feb 12, 2018

Thanks for clarifying!

How about other backy2 features?

@flames
Contributor

flames commented May 15, 2019

Hello,
I have been running eve4pve-barc for a while now and I am very satisfied. Thanks Daniele!
Now, out of curiosity, I tested backy2 and got some interesting results. Please be aware I am only comparing the features that I use myself, on a Proxmox/Ceph cluster with NFS backup storage.

eve4pve barc pros:

  • fully automated, a simple cronjob to make your backups
  • simple, dialog-driven restores
  • simple to extend or to fix something
  • very fast, since it uses the very efficient ceph diff feature (** but see the related contra)
  • well documented

eve4pve barc contras:

  • does not clean up old snapshots automatically on the source ceph (** see the related pro)
  • no deduplication (** this is ok, barc is a pretty simple helper script and still a good one! it does not pretend to be a full backup solution! barc relies on the ceph api. please don't give this point too much weight)

backy2 pros:

  • deduplication! backy2 deduplicates before it even sends and writes the backup to the destination, by comparing only metadata and size stored in an sqlite db (see the sketch after this list)
  • deduplication is effective and fast (compared with zfs dedup it's light years ahead in speed)!
  • scrubbing (consistency check of backups)
  • ability to run only full backups (skipping diffs), and thanks to the fast and effective deduplication on the source side those backups take only a little longer than barc diffs (especially when the diffs reach their max and barc starts to merge older diffs, which is slow). in this mode you can purge all ceph snapshots directly after a disk image is backed up, so your ceph doesn't waste space on older snapshots
  • well documented
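
To sketch the deduplication idea mentioned above (my own illustration with a made-up table layout, not backy2's actual schema): each block's checksum is looked up in a local sqlite table, and blocks that are already known are only referenced instead of being sent again.

```python
import hashlib
import sqlite3

db = sqlite3.connect("blocks.sqlite")
db.execute("CREATE TABLE IF NOT EXISTS blocks (sha512 TEXT PRIMARY KEY, uid TEXT)")

def store_block(uid, data, write_to_target):
    """Write a block to the backup target only if its checksum is not already known."""
    checksum = hashlib.sha512(data).hexdigest()
    row = db.execute("SELECT uid FROM blocks WHERE sha512 = ?", (checksum,)).fetchone()
    if row:
        return row[0]               # duplicate block: just reference the existing copy
    write_to_target(uid, data)      # new block: actually send it to the backup target
    db.execute("INSERT INTO blocks VALUES (?, ?)", (checksum, uid))
    db.commit()
    return uid
```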

backy2 contras:

  • no built-in automation at all (except a sample script in the docs). still, thanks to the good documentation I had no issues automating it for my needs in under 20 minutes
  • in diff mode it also needs to keep an old snapshot, but only the last one. A little bit harder to automate (see the sketch below).
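
For that last point, a small sketch of the clean-up I mean, keeping only the newest snapshot per image (made-up image name, and assuming `rbd snap ls --format json` is available):

```python
import json
import subprocess

IMAGE = "rbd/vm-100-disk-1"  # hypothetical pool/image name

def keep_only_latest_snapshot():
    """Remove every snapshot of IMAGE except the most recent one."""
    out = subprocess.run(["rbd", "snap", "ls", IMAGE, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    snaps = sorted(json.loads(out), key=lambda s: s["id"])  # snapshot ids increase over time
    for snap in snaps[:-1]:
        subprocess.run(["rbd", "snap", "rm", f"{IMAGE}@{snap['name']}"], check=True)
```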

Please let me also append a question here:
barc creates a snapshot for every diff and does not clean up the snapshots after a successful backup.
Are those snapshots incremental deltas, so that barc relies on all of them every night, or can I delete all but the very last snapshot per disk image? In case of a reset barc won't work, so I need to move the old backups to a different location... which is my default way: I run one initial and 6 diffs, then start over with barc reset and move the old week's backups to a different folder every Sunday.
The problem is, my ceph can't hold more than 7-8 snapshots per disk image before it gets full.
Thanks in advance

And, Daniele, please let me leave a suggestion:
You made an awesome script package with eve4pve, but it would be even more awesome to write another script package that works with backy2. I have had a look at your site, so I know you would definitely draw benefits from it :))
ceph + backy2 + an automation like barc would be a bomb!
I would also try to help you, since we are more or less in the same boat.

Kind regards,
Arthur
