| SYSTEM INFORMATION | |
|---|---|
| OS type and version | Debian 13 |
| Webmin version | 2.6 |
| Virtualmin version | 8.0.1 |
| Webserver version | REQUIRED |
| Related packages | SUGGESTED |
I observed that when creating a backup to a remote SSH/FTP/FTPS target, Virtualmin first creates the backup on the local filesystem and then uploads it to the remote destination server.
That means a considerable amount of free local space (up to 100% of the used space) is needed during backup creation. I am aware that virtual servers can be uploaded individually, but if a single virtual server holds a large amount of data (40 GB of Nextcloud files in my case), that amount still needs to be free on local storage while its backup is created.
So why is the remote SSH/FTP/FTPS target not just mounted temporarily, with the backup written directly to the remote server, instead of writing the backup files locally and then uploading them?
There are well-established CLI tools to accomplish this: sshfs, rclone, and curlftpfs.
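As an illustration of what I mean (not code taken from Virtualmin), each of these tools can expose a remote target as a local directory. The variable names and the rclone remote name `mybackup` below are placeholders of my own:

```shell
#!/bin/sh
# Illustrative mount commands; all variables are placeholders to fill in.

mount_sshfs() {
    # SSH target: mount a remote path over SFTP
    sshfs "${REMOTE_USER}@${REMOTE_SERVER}:${REMOTE_PATH}" "${LOCAL_PATH}"
}

mount_rclone() {
    # Requires an rclone remote named "mybackup" configured beforehand
    # (rclone supports SFTP, FTP, and many cloud backends)
    rclone mount "mybackup:${REMOTE_PATH}" "${LOCAL_PATH}" --daemon
}

mount_curlftpfs() {
    # FTP/FTPS target: credentials are usually kept in ~/.netrc instead
    curlftpfs "ftp://${REMOTE_USER}@${REMOTE_SERVER}/${REMOTE_PATH}" "${LOCAL_PATH}"
}
```

Once any of these mounts is in place, the backup destination looks like an ordinary local directory to Virtualmin.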
To work around this issue, I am currently using my own shell script to back up:

- Mount the remote folder:
  ```
  sshfs -o IdentityFile=${KEY_FILE} -p ${REMOTE_PORT} ${REMOTE_USER}@${REMOTE_SERVER}:${REMOTE_PATH} ${LOCAL_PATH}
  ```
- Write the backup directly to the mount point:
  ```
  virtualmin backup-domain --dest "${LOCAL_PATH}/backup--%Y-%m-%d--%H-%M" ${BACKUP_OPTIONS}
  ```
- Unmount:
  ```
  fusermount -u ${LOCAL_PATH}
  ```
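For completeness, a minimal sketch of those three steps combined into one script. The variable names are placeholders, and the guard and the unmount-on-failure handling are my own additions, not part of Virtualmin:

```shell
#!/bin/sh
# Sketch: mount remote target, back up directly onto it, then unmount.
# KEY_FILE, REMOTE_PORT, REMOTE_USER, REMOTE_SERVER, REMOTE_PATH,
# LOCAL_PATH and BACKUP_OPTIONS must be set by the caller.
set -eu

backup_to_remote() {
    # 1. Mount the remote folder over SSHFS
    sshfs -o "IdentityFile=${KEY_FILE}" -p "${REMOTE_PORT}" \
        "${REMOTE_USER}@${REMOTE_SERVER}:${REMOTE_PATH}" "${LOCAL_PATH}"

    # 2. Write the backup directly onto the mount point;
    #    remember a failure instead of aborting, so we still unmount
    status=0
    virtualmin backup-domain \
        --dest "${LOCAL_PATH}/backup--%Y-%m-%d--%H-%M" \
        ${BACKUP_OPTIONS} || status=$?

    # 3. Unmount the remote folder and report the backup's exit status
    fusermount -u "${LOCAL_PATH}"
    return "${status}"
}

# Only run when actually configured, so the file is safe to source as-is.
if [ -n "${REMOTE_SERVER:-}" ]; then
    backup_to_remote
fi
```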
This approach has several advantages in my eyes:
- It does not require a lot of free local storage during backup creation
- It's faster, because backup creation and upload happen at the same time
- It could even simplify the Virtualmin code, because after mounting the remote folder, the restoration and deletion commands are the same as for the local filesystem (see my other ticket regarding backup deletion)