Every now and then I need to increase storage space on my servers because Virtualmin does not have enough space left to make backups. The frustrating thing is that I sometimes need 50-150GB of free space just to back up a virtual host with a big database and/or many files, which makes the backup up to 150GB (uncompressed). This even though the compressed file stored on the external FTP location needs far less space, because compression shaves off many GBs (e.g. a 100GB MySQL database turned into a 3GB backup).
Feature request (or question): could you make it possible for Virtualmin to compress the files while it is performing the backup and gathering the data? I.e. doing a mysqldump and piping the output through gzip before storing it in a file? And/or doing something similar with the file-based backup, so that the files are already compressed before they are stored in the tmp directory?
With such a method, much less free disk space would be required in the Webmin tmp directory while creating backups of bigger virtual hosts.
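The pattern being asked for could be sketched in shell like this. Note this is only an illustration of the idea, not how Virtualmin actually implements backups; the database name and paths are hypothetical, and a generated number stream stands in for mysqldump so the example is runnable anywhere:

```shell
#!/bin/sh
# Streaming compression: the uncompressed dump never touches the tmp
# directory, so peak disk usage is roughly the compressed size.
#
# With a real database this would look like (hypothetical db/paths):
#   mysqldump --single-transaction mydb | gzip > /backup/mydb.sql.gz
#
# The file-based part can stream the same way via tar's built-in gzip:
#   tar -czf /backup/mydomain-files.tar.gz /home/mydomain
#
# Demonstrated here with seq standing in for mysqldump:
seq 1 100000 | gzip > /tmp/demo_stream.sql.gz

# Reading it back streams through gunzip the same way:
gunzip -c /tmp/demo_stream.sql.gz | tail -n 1   # prints 100000
```

The key point is that no uncompressed intermediate file is ever written; the compressor consumes the dump as it is produced.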
Have a look at System Settings ⇾ Virtualmin Virtual Servers ⇾ Configuration: Backup and restore and the Compress MariaDB backups option. Also, did you try the bzip2 compression format? Does it make any difference for you?
Also, are you backing up to a directory with one file per domain, or to a single file? If the latter, what is the full path and file name that you are using in the backup?
I’ve changed the compression format from gzip to bzip2 now, but I don’t understand how this would help, as the problem seems to occur during the period before the backup is compressed.
I’m afraid it didn’t help. I think that’s somewhat expected, because compression happens after the mysqldump; piping the dump straight into the compressor would be needed to improve this issue.