Scheduled backups failing for large websites

I have scheduled backups set to run nightly to local disk, with an additional destination over SFTP.

Of the 5 websites on the server, only 1 repeatedly fails to back up; it’s the one that is over 100GB in size… The email just says “partially completed” and doesn’t give me any details of what failed. Same with the backup log: just vague entries like “failed” with no details…

I have tried changing Webmin’s tmpdir to a secondary NVMe drive, but it still fails.

Sounds like the backup is hitting resource limits. Check disk space, timeout settings, and memory limits. Splitting the backup into smaller chunks or using incremental backups might help!
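
If it is resources, a quick check just before the backup window should rule that in or out. A rough sketch (the paths are examples; adjust them to your own layout):

```shell
# Rough pre-flight check to run shortly before the backup fires.
# /tmp is a common staging area for archives; adjust paths to your setup.
df -h / /tmp        # free space on the root and temp filesystems
free -h             # memory in use vs. available
ulimit -a           # per-process limits for this shell
```

If any of these look tight right before the job starts, that points at the culprit; if they all look healthy, it’s probably not a raw resource limit.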

Have you checked the logs?

Where would I find these settings? The server has two 2TB NVMe drives and 64GB of RAM; it’s not running out of resources on a 140GB site backup.

Also, we are not doing incremental backups. Restoring multiple days of increments if something went wrong is too much effort.

It could be partition space. Note I had to ‘bloat’ /usr to install Discourse; it wouldn’t compile otherwise, I think. I don’t think partitions are as important with the newer file systems.

I worked at a place with 80 to 90 servers. We did an rsync to a dedicated server, and the backup actually ran on that server, if I remember correctly. The rsync copy was also a nice place to grab files quickly when needed.

Well, none of this is helping so far… I wish I could see an actual error instead of just “failed” with no details. It kind of makes Virtualmin useless for large sites.

Why would partitions be an issue when I am telling it to use a drive mounted as /backup for this backup, which is 2TB?

My partitions are:
/ 70GB
/home 1.77TB
/boot 1GB
/backup 1.78TB

I did set Webmin’s temp directory to /backup/tmp to see if it may be /tmp running out of space, but it still fails in exactly the same way.

I don’t know what temp space may or may not be used here. If it is /tmp, which is under /, then this is a problem. You could run `watch df -h` while the backup is running; that would tell you if it is a disk space problem. I don’t really know how tar works under the hood. If it builds the archive on the source filesystem, then that is where the problem is.

But, this is just a guess obviously.
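
The `watch df -h` idea can also be scripted so you have a log to review after the failure. A small sketch; the helper name, interval, and log path are my own choices:

```shell
# Hypothetical helper: append a timestamped `df -h` snapshot to a log
# at a fixed interval, so you can check afterwards whether any
# filesystem (especially / or /tmp) filled up while the backup ran.
poll_df() {
    interval="$1"; log="$2"
    while :; do
        date >> "$log"
        df -h >> "$log"
        sleep "$interval"
    done
}

# Start it just before the scheduled backup fires, e.g.:
#   poll_df 5 /tmp/backup-df.log &
# then kill it and read the log once the backup has failed.
```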

I don’t know if the command-line version has a ‘verbose’ option, but running the backup that way and watching the process might get you a more specific error.

It seems to work fine if I run the backup manually; it’s just the scheduled backup that fails, without a useful error.
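
If manual works and scheduled doesn’t, one classic difference is the environment the scheduler runs under (PATH, temp-related variables, and so on). A quick way to compare, assuming the job fires from cron; the file names here are my own:

```shell
# Capture your interactive environment for comparison.
env | sort > /tmp/interactive-env.log

# If the schedule runs via cron, a temporary crontab entry can capture
# what the job actually sees:
#   * * * * * env | sort > /tmp/cron-env.log
# Then compare the two:
#   diff /tmp/cron-env.log /tmp/interactive-env.log
```

Differences in PATH or temp directories between the two logs would explain why the same backup behaves differently when scheduled.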

Well, that’s unexpected. Are all the servers backed up on the same schedule, or are they all set differently?