How to debug a failed backup

I am getting frequent failures when my backups upload. They report:

"Uploading archive to Amazon’s S3 service …
… upload failed! Failed to upload information file : "

About 50% of the time. This is frustrating, as I really can't figure out why this is happening. The log info is rather useless - what I really need to know is why the upload failed. Did it time out? Was there a response from the server? What was that response?

I should note that I'm not actually using AWS; I am using Linode Object Storage, which supports the S3 protocol. As I said, it works fine about 50% of the time, but it frequently fails. So is there any way to get verbose logging in Virtualmin to help me diagnose this further?

Do you get any logs from the storage server?

Nothing at all - the UI provides no way to look at past activity. So I would assume that it sends messages to the client, and those must be visible or stored somewhere.

Have you tried running the backup from the Virtualmin CLI? That’s usually how I like to troubleshoot things. Some errors can get lost in the web UI (though that may be less likely now). Run it in a screen or tmux if logged in over ssh, so that if your session times out waiting on big files to do stuff, you can re-attach.


Joe, not yet, I'll try that. Is there a CLI way to invoke a preconfigured scheduled backup, or do I need to invoke it manually? And if I do it manually, can I use the same S3 configuration the scheduled backup uses, so the two runs are identical?

OK so I ran the command from the CLI and got identical feedback:

"Uploading archive to Amazon’s S3 service …
… upload failed! Failed to upload information file :

Backup failed!"

No clues at all. I can see in the storage that it successfully created the folders, but no files were uploaded.

OK, so it looks like the .tar.gz files are uploading but the other files (.gz.info, .gz.dom) are not, so it can't be authentication; and these are tiny files, so it can't be file size.

OK, after a week of testing I'm no further along on this one. I've created multiple buckets at Linode and run multiple backups from various Virtualmin machines, and every single time the .tar.gz files are uploaded correctly but the .gz.dom and .gz.info files fail to upload.

The major issue here is the total lack of logging. I get absolutely no explanation from the Virtualmin scripts or from Linode. The Linode team have been looking at this for over a week now and say they see nothing wrong - no errors - and the fact that I can successfully upload these small files via Cyberduck suggests that the bug lies with Virtualmin.

Is there any way I can add some low-level logging to the Virtualmin scripts so that we can see the exact server responses when these files are being uploaded?

I’m not sure if there are any additional options for getting more logging output - we’re just calling out to Perl S3 modules for this functionality, so Webmin logging won’t have much control over it.

We’ve also never tried it against Linode, so I don’t have any experience making it work, and I don’t know how compatible Linode is with AWS S3. But I may be able to try it out in a few days, once I get some fires put out.

Joe,
Thanks, any help would be appreciated.
I’m not great with Perl, but I reckon I could do a bit of testing if you could point me in the right direction. I’ve got nowhere with Linode on this, but to be fair to them, I seem to be able to upload with other clients without problems.

@Joe I just got this back from Linode

" We are unable to share our full internal logs, but one thing I’ve noticed today in our logs is that your .info/.dom files are uploaded with the user agent “libwww-perl/5.833” and the filename looks to be improperly generated (based on the .info/.dom examples you’ve provided):

PUT /virtualminbackup/mebbin%2FSaturday%2Fweightlossinabox%2Ecom%2Eau%2Etar%2Egz%2Einfo HTTP/1.1" 403 198 "-" "libwww-perl/5.833

vs

PUT /virtualminbackup/mebbin/Saturday/weightlossinabox.com.au.tar.gz HTTP/1.0" 200 0 "-"

So perhaps there’s a bug in the Perl there or the .info/.dom files are just handled differently – I’m not familiar with Virtualmin but their Support team may be able to provide more details."

So it seems that the info files' keys are URL-encoded when the API is called, but the main tar file's key is not - that looks significant.
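The encoding in that failing request does look wrong: under RFC 3986, `.` is an unreserved character that should never be percent-encoded, and `/` here is a path separator, so a correct encoder would leave this key untouched. Over-encoding them produces a different raw path than the one the tar.gz upload uses, which is consistent with the 403. A short Python sketch (illustrative only, not Virtualmin's Perl code) showing the difference:

```python
from urllib.parse import quote, unquote

key = "mebbin/Saturday/weightlossinabox.com.au.tar.gz.info"

# Correct encoding: '.' is unreserved (RFC 3986) and '/' separates path
# segments, so a proper encoder leaves this key unchanged.
good = quote(key, safe="/")
print(good)  # mebbin/Saturday/weightlossinabox.com.au.tar.gz.info

# The over-encoded form seen in the failing .info request, where '/'
# and '.' were percent-encoded as %2F and %2E.
bad = "mebbin%2FSaturday%2Fweightlossinabox%2Ecom%2Eau%2Etar%2Egz%2Einfo"

# Decoding shows both forms name the same object, but a server that
# matches or signs against the raw request path sees two different
# strings - one of them unexpected.
print(unquote(bad) == key)  # True
print(good == bad)          # False
```

So if the Perl client is running the key through an over-aggressive encoder only for the .info/.dom uploads, that would explain why those two file types fail while the .tar.gz succeeds.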

@Joe, any update on this? I’ve not had a successful backup of my servers for about 4 weeks now, and I’m getting pretty concerned.

Not having backups will make you cry.

In the meantime, try a different protocol, and/or rent a cheap storage VPS for a while, or… 🙂

@Joe, is there anybody I can pay to fix this? I really need this sorted urgently, and all indications so far are that it’s somehow related to how Virtualmin encodes the URLs for the backup info files. Please let me know what I can do to escalate this.