A few months ago, CrashPlan announced that they were terminating service for home users, in favor of small business and enterprise plans. I'd been a happy user for many years, but this announcement came along with more than just a significant price increase. CrashPlan removed the option for local computer-to-computer or NAS backups, which is key when doing full restores on a home internet connection. Also, as someone paying month-to-month, they gave me 2 months to migrate to their new service or cancel my account, losing access to my historical cloud backups that may be only 3 or more months old. I was pretty unhappy with how they handled the transition, so I started investigating alternative software and services.

These are the basics I expect from any backup software today. If any of these were missing, I went on to the next candidate on my list. Surprisingly, this led to us updating our security handbook to remove recommendations for both Backblaze and Carbonite, as their encryption support is lacking.

**Backup encryption.** All backups should be stored with zero-knowledge encryption. A backup provider should not require storing any encryption keys, even in escrow. In other words, a compromise of the backup storage itself should not disclose any of my data.

**Block-level deduplication at the cloud storage level.** I don't want to ever pay for the storage of the same data twice.

---

You could also use (for example) restic, which can use rclone as a backend to back up to Drive.

I strongly recommend against using restic right now, at least to anyone trying to back up a large number of bytes and a large number of files (I have ~60M files here, occupying ~32TB of space on the source file systems being backed up), because:

- restic backup uses a ton of memory when you have a large number of files (to the point of 64GB - yes, you read that correctly: 64 gigabytes - of RAM not being enough).
- restic prune is mandatory: if you let your restic repo grow too large (e.g., a month of daily backups, with just ~1M files and ~50GB changing on the source), your remote repo will get corrupted (ask me how I know).
- restic prune currently takes a ton of time and uses a ton of memory: even on an 8-vCPU, 128GB RAM Google Compute node I created specifically for the prune, after 2 full days running it aborted with OOM.
- restic development is pretty much stuck right now. I'm not complaining (on the contrary, I'm grateful to fd0 and the other restic developers for all the time they spent on the project), but it's simply not moving forward: for example, a patch to make restic prune minimally workable in large-repo situations has been stuck for over 2 months now, with no activity except for people reporting repo corruption issues. Another patch to likewise make restic restore workable for large repos has been stuck for almost as long, despite lots of reports (including mine) that it's working perfectly and only needs to be approved for merging.

So, despite having invested literally months trying to get restic working on my setup, I'm sadly being forced to give up on it, and I'm moving to borg (BTW, that's how I found this topic).

You can do a borgbackup to local disk, then use rclone copy to sync it to Google.

I think you meant rclone sync, right? Otherwise, lots of useless gunk would remain on the rclone remote after each borg prune. Anyway, that's exactly how I'm proceeding here: I finished setting up borg last night, and since then I've been running a borg create to back up that 62M-file / 32TB source to a local Borg repo, while simultaneously running rclone sync in a loop copying it to Google Drive, something like this:

```
while ! rclone -v --checkers=12 --transfers=2 \
    --multi-thread-cutoff=1k --multi-thread-streams=8 \
```

So it just runs one rclone sync after the other until one of them finishes successfully. That's not exactly what I wanted, which would be to stop the loop after an rclone sync runs without copying anything (because the local repo was not modified, i.e., the parallel borg create command has finished).
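The stop condition described in the thread (keep syncing while the backup is still writing, then finish with one last clean sync) can also be keyed off the backup process itself rather than off whether a sync copied anything. A minimal sketch, assuming a caller-supplied `push_repo` function wrapping `rclone sync`; the function name, repo path, and remote name are placeholders of mine, not anything from the thread:

```shell
#!/bin/sh
# Run a long-lived backup command in the background and keep re-syncing the
# local repo while it runs; once it exits, do one final sync and stop.
# `push_repo` must be defined by the caller (e.g. wrapping `rclone sync`).
sync_until_done() {
    "$@" &                    # e.g. borg create ... (runs in background)
    backup_pid=$!
    while kill -0 "$backup_pid" 2>/dev/null; do
        push_repo || true     # partial sync while the repo is still changing
    done
    wait "$backup_pid"
    push_repo                 # final pass over the now-quiescent repo
}

# Example wiring (repo path and remote name are hypothetical):
# push_repo() { rclone sync -v --checkers=12 --transfers=2 /backup/borg-repo gdrive:borg-repo; }
# sync_until_done borg create --stats /backup/borg-repo::'{hostname}-{now}' /data
```

Compared with the `while ! rclone sync ...` loop, this stops as soon as the backup process exits instead of relying on a sync pass that happens to copy nothing, and a transient rclone failure during the loop does not keep it running forever.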