Backup strategy?

Hi

I’d like to get ideas or suggestions on how to improve my backup strategy for Virtualmin, as I’m running into a timing issue. I have set up my system to do a full backup on Sunday night and an incremental backup every other day. I have configured the backup to transfer each virtual server individually to speed up the process as much as possible, but as the volume of customer files grows, the Sunday backup collides more and more often with Monday’s one! The full backup is around 1 TB right now, and it runs between two servers in the same datacenter with a gigabit link between them; both servers use RAID.
Any ideas on how to solve this problem?

Thanks

Vincèn

Backup is a complex subject and each admin sets it up according to the resources available and personal preferences. With the 1 TB backup that you have @vincen, I would recommend something like Borg Backup, which has block-level deduplication and a full feature set besides.

Once it is set up correctly, you could run backups hourly if you liked, thanks to its more efficient algorithms, and give your clients the assurance that you could restore their account to the state it was in at any hour of the day. All this in less time and with less network bandwidth and disk space (block-level dedup magic) than is being used at present.
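
For illustration, a minimal Borg sketch along those lines, assuming a repository on the second server reachable over SSH as backup@server2 (names and paths are placeholders):

```bash
# One-time: create a deduplicating repository on the remote server
borg init --encryption=repokey backup@server2:/backup/borg

# Per run (hourly, from cron): only blocks not already in the repo are sent
borg create --compression lz4 --stats \
    backup@server2:/backup/borg::{hostname}-{now} \
    /home /etc

# Thin out old archives and reclaim space
borg prune --keep-hourly 24 --keep-daily 7 --keep-weekly 4 \
    backup@server2:/backup/borg
```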

So the main bottleneck is the 1 Gb link speed? Or is it the speed at which server1 can zip things up over SSH to server2? That typically isn’t running at 1 Gb.
If 1 Gb is the bottleneck…

- One option: 10 Gb network cards are getting cheap, if the DC supports them.
- Or bond two or more 1 Gb network ports together for a 2, 3, 4, 5+ Gb connection (a bonding sketch follows this list). This is the cheapest option, as most servers have two 1 Gb ports anyway.
- A third option: some sort of external RAID-to-backplane setup between the two servers. Server 1 would see the drives in server 2 as if they were local, and the link would run at drive speeds, not network speeds.
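
If the DC switch supports LACP, a minimal bonding sketch with nmcli (the interface names, IP address and bonding mode are assumptions; check your ports with `ip link`):

```bash
# Create an 802.3ad (LACP) bond; needs matching config on the switch side
nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options "mode=802.3ad,miimon=100"

# Enslave both gigabit ports to the bond
nmcli con add type ethernet ifname eth0 master bond0
nmcli con add type ethernet ifname eth1 master bond0

# Address the bond and bring it up
nmcli con mod bond0 ipv4.addresses 192.0.2.10/24 ipv4.method manual
nmcli con up bond0
```

One caveat: with LACP a single TCP stream still tops out at one link’s speed; the gain shows up across parallel transfers, which suits per-virtual-server backups.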

If zipping over ssh is the bottleneck…
Have backups stored locally on the Virtualmin server, then rsync that folder to external disks on a separate schedule (a sketch below). You could even have Virtualmin run the rsync command right after the backups finish. This doesn’t really alleviate the network bottleneck, but it does let Virtualmin’s timing work: incrementals need to see when the last backup ran in order to figure out which files to back up. It would require local space, although you could probably delete all but the last week locally, since rsync won’t delete missing remote files unless told to do so.
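
A minimal sketch of that split schedule, assuming local backups land in /backup/virtualmin and the second box is reachable over SSH as backup@server2 (both are placeholders):

```bash
#!/bin/sh
# Mirror the local Virtualmin backup folder to the remote server.
# No --delete flag, so pruning locally never removes the remote copies.
rsync -a --partial /backup/virtualmin/ backup@server2:/backup/virtualmin/

# Keep only the last 7 days locally to limit disk usage
find /backup/virtualmin -type f -mtime +7 -delete
```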

Thanks for the suggestion @calport, but I need something usable with Virtualmin so that databases, for example, are correctly backed up :wink:

From what I have seen of the 21 hours it takes to do the backup, 18 hours are spent zipping and building the archive files. From what I know, Virtualmin doesn’t zip over SSH; it builds the archive locally before transferring it by FTP to the backup server. So I’m not sure which way to go to solve this problem…

Are you implying that something like Borg Backup is not ‘usable with Virtualmin so that databases, for example, are correctly backed up’?

If you try Borg, you might find that it does a more comprehensive backup with less CPU utilization, storage and bandwidth than your current setup uses.

Yep, from what I see on the Borg website, it won’t be able to back up MySQL databases in such a way that they are recoverable afterwards! Databases need to be exported properly to be backed up. I have not found any Webmin/Virtualmin modules for Borg, so thanks for mentioning it, but I think it’s in no way a good solution for backing up Virtualmin files and the virtual servers in it :frowning:
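
For reference, “exported properly” means something like this before the file-level backup runs — a minimal sketch, with a placeholder database name and assuming credentials come from ~/.my.cnf:

```bash
# Dump to a plain SQL file that any file-level tool (Borg, tar, rsync...)
# can then pick up safely; --single-transaction gives a consistent
# snapshot of InnoDB tables without locking them
mysqldump --single-transaction --routines --events dbname \
    > /backup/sql/dbname.sql
```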

Your logic is flawless.

My error in understanding the issue and how Virtualmin backs up.
Did some testing…
So I tried ‘tar only’ for backups, then gzip --fast, then pigz --fast.
| Method | Size | Run time |
| --- | --- | --- |
| Tar only | 9.04 GB | 21:04 |
| gzip --fast | 7.7 GB | 20:25 |
| pigz --fast | 7.7 GB | 17:30 |

Gzip is single-threaded; it saves space but not much time. Pigz is multi-threaded; it saves the same space and a bit more time.
I had to install pigz, and the settings I changed are under Virtualmin → System Settings → Virtualmin Configuration → Backup and restore. Pigz seems to be about 15% faster, possibly even more on hyper-threaded CPUs.
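
For anyone who wants to compare outside Virtualmin first, the equivalent pipelines look roughly like this (the path is a placeholder):

```bash
# Single-threaded: gzip uses one core no matter how many are idle
tar -cf - /home/example.com | gzip --fast > backup.tar.gz

# Multi-threaded: pigz spreads the compression across all cores
tar -cf - /home/example.com | pigz --fast > backup.tar.gz
```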

Using SSH for backups to any remote host with a simple bash script run from cron will also do… Set your own timing and deploy the best strategy you can come up with.
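
A minimal sketch of that approach, assuming key-based SSH to backup@server2 and Virtualmin’s command-line API (check the exact flags with `virtualmin backup-domain --help`):

```bash
#!/bin/sh
# Run from cron, e.g.: 30 2 * * * /usr/local/bin/nightly-backup.sh
# Push all virtual servers to the second machine over SSH.
virtualmin backup-domain --all-domains --all-features --newformat \
    --dest ssh://backup@server2/backup/virtualmin/
# Weekday runs could add --incremental and keep Sunday as the full run.
```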
