backing up the server

Hi,

How do I do a complete backup of my Webmin/Virtualmin server?
I have noticed several backup options: backing up virtual servers, and backing up the Webmin server configuration settings. But is there a way to get one file and do a complete restore on another server, for minimum downtime?

Howdy,

If you wanted a bare metal recovery tool, you would need some sort of imaging software.

Many people just use the Virtualmin backups to back up their Virtual Servers. If you back all those up and something happens to your server, all you’d have to do is reinstall Virtualmin and then import those backups.

Some people also perform a backup of /etc/, if they make system config changes to their server.
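If you want to script that /etc backup, here’s a minimal sketch. The throwaway directory stands in for /etc so the example is safe to run anywhere; on a real server you’d point SRC at /etc and run it as root so permission-restricted files are included.

```shell
#!/bin/sh
# Sketch: snapshot a config directory into a dated tarball.
# The mktemp directory is a stand-in for /etc, purely for illustration.
SRC=$(mktemp -d)                        # stand-in for /etc
echo "example setting" > "$SRC/demo.conf"
DEST=$(mktemp -d)                       # stand-in for your backup directory
ARCHIVE="$DEST/etc-$(date +%F).tar.gz"
# -p preserves permissions; -C keeps the archive paths relative
tar -czpf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")"
ls -lh "$ARCHIVE"
```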

-Eric

But,

If the server fails and I restore a Virtualmin backup, what about Webmin?

Theoretically, a friend would provide a location for my standby server in his server room.
I install Virtualmin and set up automatic backups to the standby location…
But what about Webmin: all the firewall rules, and the services that I have disabled to free up RAM?
Could those be covered by a Webmin config backup?

If yes, that is neat.

Webmin config and Virtualmin config are not necessarily the same.

See the attachment for the features included in Virtualmin backups. I schedule my Virtualmin backups for 2am every night, as they create considerable CPU load on the server. MySQL has a habit of crashing if it doesn’t get enough I/O, so use caution here if you host MySQL on the same server.

Webmin backups are ALSO required, if you have any custom configuration you care about. My infrastructure is just LAMP sites, and there isn’t a lot of specialization on the system level, so I just schedule these as weekly backups.

The only difficult part about restoring these backups is that they require a running OS/Virtualmin/Webmin installation, so I just capture an image of the raw system to deploy as needed. From there, I restore my Webmin backup, then my Virtualmin backup (which takes a while, unfortunately … decompressing lots of tarballs is no small task), switch DNS to the rollover server, and I’m back in business.

Thank you Jesse!

That is really a smooth process to restore to a full working server.
I will set up another VPS server in another location and push backups there; I saw an option for that.
My sites are not big, both below 500MB.

Thanks again.

G.

If you have sensitive information, be careful with how you transfer. I’ve noticed that backing up to s3 uses a LOT of system resources (i/o intensive, storage-intensive, cpu-intensive, and network-intensive … and slow), and I don’t think that traffic is encrypted (standard http, I believe?). Any other option would be better, but I just back up locally then rsync to another server so I have the best control over how it’s transferred.

Howdy,

Yeah, the process of creating tar/gzip based archives can be resource intensive, though sending them to Amazon’s S3 service shouldn’t be more intensive than sending them to other destinations.

Files submitted to S3 are sent over the HTTPS protocol.

Additionally, if you’re using Virtualmin Pro, you could opt to encrypt the backup archives, though that would increase the resources used when creating the backups.
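If you’re not on Pro, you can get a similar effect by encrypting the archive yourself before transfer. A hand-rolled sketch using openssl (this is not Virtualmin’s own mechanism; the file and passphrase here are throwaway values for illustration, and in practice you’d read the passphrase from a root-only file):

```shell
#!/bin/sh
# Sketch: symmetric encryption of a backup archive before transfer.
ARCHIVE=$(mktemp)
echo "pretend this is a tar.gz backup" > "$ARCHIVE"
PASS=example-passphrase    # placeholder; keep the real one out of scripts
# Encrypt with AES-256-CBC; -pbkdf2 strengthens the key derivation
openssl enc -aes-256-cbc -pbkdf2 -salt -in "$ARCHIVE" -out "$ARCHIVE.enc" -pass "pass:$PASS"
# ...transfer "$ARCHIVE.enc" to the remote side...; to restore:
openssl enc -d -aes-256-cbc -pbkdf2 -in "$ARCHIVE.enc" -out "$ARCHIVE.dec" -pass "pass:$PASS"
```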

If you’re finding that the backups are particularly resource heavy, one thing to verify is to make sure that the backups are configured to use gzip, rather than bzip2.

Then, you could always pass the “--fast” parameter to gzip. That will result in slightly larger backups, but it’ll use fewer resources while creating them.

You can set that parameter in System Settings -> Virtualmin Config -> Backup and Restore, in the “Extra command-line parameters for compression command” field.
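To see the trade-off, here’s a small sketch comparing gzip --fast (compression level 1) against the default (level 6), using generated data in place of a real backup archive; real backups typically show the same pattern at larger scale:

```shell
#!/bin/sh
# Sketch: size trade-off of gzip --fast vs the gzip default level.
DATA=$(mktemp)
seq 1 200000 > "$DATA"                    # reasonably compressible input
gzip --fast -c "$DATA" > "$DATA.fast.gz"  # level 1: quick, larger output
gzip -c "$DATA" > "$DATA.default.gz"      # level 6: slower, smaller output
ls -l "$DATA.fast.gz" "$DATA.default.gz"  # compare the resulting sizes
```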

-Eric

> You can set that parameter in System Settings → Virtualmin Config → Backup and Restore, in the “Extra command-line parameters for compression command” field.

In this field you can also add this gzip option:

--rsyncable

This makes the tar.gz backup rsync-friendly. You can then use rsync to transfer it to an external server.
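A sketch of that in practice. Note that --rsyncable requires GNU gzip 1.7+ or a distro-patched gzip, so this probes for support first; the generated data is a stand-in for a real archive:

```shell
#!/bin/sh
# Sketch: produce a gzip that rsync can delta-transfer efficiently.
# --rsyncable periodically resets the compressor so a small change to the
# input only changes a small region of the .gz output.
DATA=$(mktemp)
seq 1 50000 > "$DATA"
if gzip --help 2>&1 | grep -q rsyncable; then
    gzip --rsyncable -c "$DATA" > "$DATA.gz"
else
    gzip -c "$DATA" > "$DATA.gz"    # fallback: plain gzip, still a valid archive
fi
gzip -t "$DATA.gz"                  # verify the archive is intact
```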

roberto

Do you guys have any rsync scripts to share?

I just whipped this up to be able to run the same script consistently on all servers, you’re welcome to it.

#!/bin/bash
# Originally authored by Jesse O'Brien

# Set target server(s)
TARGET=
SRCDIR=
DIR=
VERBOSE=0

run() {
    parseopts "$@"
    verbose "Source Directory: ${SRCDIR}"
    verbose "Target: ${TARGET}"
    verbose "Target directory on target: ${DIR}"
    TARGETDIR="${TARGET}:${DIR}"
    verbose "Full Target path: $TARGETDIR"
    # rsync to the target; --bwlimit caps transfer bandwidth (in KB/s)
    rsync -avhP --bwlimit=1000 "${SRCDIR}" "${TARGETDIR}"
    verbose "Backup Complete"
}

# Parse all arguments
parseopts() {
    # Leading ":" in the optstring enables the ":" branch below for
    # options that are missing their argument
    while getopts ":hd:t:vs:" OPTION ; do
        case $OPTION in
            h)
                usage
                exit 1
                ;;
            d)
                DIR=$OPTARG
                ;;
            t)
                TARGET=$OPTARG
                verbose "Target is: $OPTARG" >&2
                ;;
            v)
                VERBOSE=1
                ;;
            s)
                SRCDIR=$OPTARG
                ;;
            \?)
                echo "Invalid Option: -$OPTARG" >&2
                exit 1
                ;;
            :)
                echo "Option -$OPTARG requires an argument." >&2
                exit 1
                ;;
        esac
    done

    if [[ -z $TARGET ]] ; then
        echo "Somehow, you have no target set."
        exit 1
    fi
}

# Retained to ensure that additional debug info is always available
verbose() {
    if [[ $VERBOSE -eq 1 ]] ; then
        echo "*** $* ***"
    fi
}

# Print all the options for this script
usage() {
    cat << EOF
usage: $0 options
OPTIONS:
   -h   Show [H]elp (this text)
   -s   [S]ource directory
   -d   Target [D]irectory
   -v   [V]erbose
   -t   [T]arget (should be an FQDN, e.g. github.com)
EOF
}

# Main run loop
run "$@"