I have scheduled backups set to run nightly to local disk, with an additional destination over SFTP.
Of the 5 websites on the server, only 1 repeatedly fails to back up; it’s the one that is over 100GB in size… The notification email just says “partially completed” and doesn’t give me any details of what failed. Same with the backup log: just vague things like “failed” with no details…
I have tried changing Webmin’s temp directory to a secondary NVMe drive, but the backup still fails.
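For reference, the change amounts to something like this (the NVMe mount point is just mine; `tempdir` is Webmin’s global temp directory setting in `/etc/webmin/config`):

```
# /etc/webmin/config — point Webmin's temp space at the secondary NVMe
tempdir=/mnt/nvme/tmp
```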
Sounds like the backup is hitting resource limits. Check disk space, timeout settings, and memory limits. Splitting the backup into smaller chunks or using incremental backups might help!
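If you want to try incremental backups from the shell, something like the sketch below should work. The domain name and destination path are placeholders; `virtualmin backup-domain` and its `--incremental` flag are part of the standard Virtualmin CLI, but double-check the flags against `virtualmin backup-domain --help` on your version:

```
# Full backup first (incremental runs need a full one to diff against)
virtualmin backup-domain --domain example.com \
  --dest /backup/example.com.tar.gz --all-features --newformat

# Later runs only archive files changed since the last full backup
virtualmin backup-domain --domain example.com \
  --dest /backup/example.com-incr.tar.gz --all-features --newformat --incremental
```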
It could be partition space. Note that I had to grow /usr to install Discourse; it wouldn’t compile otherwise, I think. I don’t think separate partitions matter as much with the newer file systems.
I worked at a place with 80 to 90 servers. We rsynced everything to a central server, and the backup itself was run on that server, if I remember correctly. The rsync mirror was also a handy place to grab files quickly when needed.
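A minimal sketch of that approach (the host name and paths are placeholders):

```
# Mirror the large site to a backup host; -a preserves permissions and
# ownership, -H preserves hard links, --delete keeps the mirror in sync
rsync -aH --delete /home/example.com/ backup-host:/srv/mirrors/example.com/
```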
Well, none of this is helping so far… I wish I could see an actual error instead of just “failed” with no details. It kind of makes Virtualmin useless for large sites.
I don’t know what temp space may or may not be used here. If it’s /tmp, and /tmp is on the / partition, then that could be the problem. You could run `watch df -h` while the backup is running; that would tell you whether it’s a disk space problem. I don’t really know how tar works under the hood, but if it builds the archive in the source filesystem, that’s where the problem would be.
But this is just a guess, obviously.
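Something like this in a second terminal would show which filesystem fills up during the run (the mount points listed are just examples):

```
# Refresh disk usage every 5 seconds while the backup runs;
# whichever filesystem races toward 100% is the culprit
watch -n 5 df -h / /tmp /home /backup
```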
I don’t know if the command-line version has a ‘verbose’ option, but running the backup that way might let you watch the process and get a more specific error.
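If it works like the usual Virtualmin CLI, running the backup by hand and capturing everything it prints would look something like this (the domain and destination are placeholders; per-feature progress is normally printed to the terminal):

```
# Run the backup manually and keep a copy of all output,
# including any error messages the email summary swallows
virtualmin backup-domain --domain example.com \
  --dest /backup/example.com.tar.gz \
  --all-features --newformat 2>&1 | tee /root/backup-debug.log
```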