I am trying to ‘improve’ my backup strategy and finally get the various backups into Amazon S3 buckets. I would prefer to create the backups locally and then, once they are done, move them over to the S3 bucket.
Currently I run a full backup of all servers on Sunday night and an incremental every other night. I also run a full Virtualmin settings backup on Sundays, an hour after the server backups. All of the backups use strftime replacements for ‘versioning’.
I could not find any details about the script’s ‘internal info’ in the documentation. Are there any preset bash variables I can utilize, or is it a matter of making the script clever enough to find the latest backup it has to transfer?
OR is there an even better strategy I have not thought about?
Within that script I know where the backup is located and its file name, so it’s just a case of moving/copying it to wherever you want. To help with debugging, the output from the script is saved in miniserv.err whether it fails or not, so you have a reference to look at. As for the file name to transfer, just make the destination something like /root/backup/%d-%m-%y for a directory, then loop through the files in that directory; you don’t need any variables, all you need to know is the date, which is very simple to get in a script. Then get Virtualmin to delete the older local backups using the GUI.
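To make that concrete, here is a minimal sketch of a post-backup script along those lines. It assumes the AWS CLI is installed and configured, uses a made-up bucket name, and expects the backup destination to follow the /root/backup/%d-%m-%y pattern suggested above:

```bash
#!/bin/bash
# Rebuild today's backup directory name from the date (matches a
# /root/backup/%d-%m-%y destination configured in Virtualmin).
BACKUP_DIR="/root/backup/$(date +%d-%m-%y)"
S3_BUCKET="s3://my-backup-bucket"   # placeholder bucket name

# Loop over whatever Virtualmin wrote today and push each file to S3.
for f in "$BACKUP_DIR"/*; do
    [ -e "$f" ] || continue                        # nothing to do if the dir is empty
    aws s3 cp "$f" "$S3_BUCKET/$(date +%d-%m-%y)/" \
        || echo "upload failed: $f" >&2            # errors end up in miniserv.err
done
```

Since the script’s output lands in miniserv.err either way, echoing failures like this gives you something to look for when an upload goes wrong.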
Geez, how could I have missed that!?
I have implemented a little script that combines the three backup files (the actual tar.gz, .dom and .info) into a single file, which is then transferred to the relevant S3 bucket depending on whether it is a full or incremental backup.
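For anyone finding this later, the script is roughly along these lines (not the exact version; the bucket names are placeholders, and I’m using the Sunday-night schedule from my first post to decide full vs incremental):

```bash
#!/bin/bash
# Bundle today's three backup files (.tar.gz, .tar.gz.dom, .tar.gz.info)
# into one archive and ship it to the matching S3 bucket.
TODAY=$(date +%d-%m-%y)
BACKUP_DIR="/root/backup/$TODAY"
BUNDLE="/root/backup/bundle-$TODAY.tar"

# Full backups run on Sunday night, everything else is an incremental
# (assumes the post-backup command fires before midnight).
if [ "$(date +%u)" -eq 7 ]; then
    S3_BUCKET="s3://my-full-backups"          # placeholder
else
    S3_BUCKET="s3://my-incremental-backups"   # placeholder
fi

tar -cf "$BUNDLE" -C "$BACKUP_DIR" . \
    && aws s3 cp "$BUNDLE" "$S3_BUCKET/$TODAY.tar" \
    && rm -f "$BUNDLE"
```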
Thanks team!