Contabo Object Storage

OS type and version: Ubuntu 20.04
Webmin version: 1.990
Virtualmin version: 6.17-3 Pro
Related packages: aws
Error — Perl execution failed

File does not exist: {
  "message":"API rate limit exceeded"
} at S3/ line 26.

I’m trying to connect to Contabo Object Storage, which they launched last week. At first I was apparently using the wrong credentials, but now the aws CLI works properly: I can list buckets and do everything I need from the command line. However, when I try to access it or do almost anything through Virtualmin, I get the error above. Sometimes it lists the buckets and sometimes it fails with that error, and I can’t create a backup because it reports that the API rate limit was exceeded. Are there any options I can change so Virtualmin stops doing whatever it’s doing to reach that limit?

Thank you!


My settings match the Contabo documentation, and I can see the buckets under Virtualmin → Backup and Restore → Amazon S3 Buckets without the aws CLI. However, backups fail.
For a test backup of 573,800 KB I got “… upload failed! Invalid HTTP response: HTTP/1.1 413 Request Entity Too Large”.
For another 5.059 KB backup the error message was “… upload failed! Failed to upload information file :”. In that instance the transfer was interrupted just before reaching 5 MB.
Changing the Upload chunk size in MB under Virtualmin → Backup and Restore → Cloud Storage Providers → Amazon S3 → Edit Cloud Provider was not helpful for me.
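For what it’s worth, the arithmetic on that 413 lines up with a per-request size cap at the proxy (Contabo support mentions a 200 MB Cloudflare limit further down this thread). A minimal sanity check, assuming each multipart chunk is sent as its own HTTP request:

```python
import math

# Assumption: the proxy in front of Contabo rejects any single HTTP
# request body over ~200 MB (the limit Contabo support quoted).
CAP_MB = 200
BACKUP_KB = 573_800  # the test backup that failed with 413 above

def parts_needed(size_kb: int, chunk_mb: int) -> int:
    """Number of multipart-upload parts at chunk_mb MB per part."""
    return math.ceil(size_kb / 1024 / chunk_mb)

unchunked_mb = BACKUP_KB / 1024
print(unchunked_mb > CAP_MB)         # True: one ~560 MB PUT trips the cap -> 413
print(parts_needed(BACKUP_KB, 100))  # 6 parts of at most 100 MB, each under the cap
```

So an unchunked upload of that backup can never get through, while a 100 MB chunk size keeps every request comfortably under the cap.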

Does anyone have any suggestions?

I have that same error as well and have been working with Jamie to figure out a solution so that it stops reporting a failure. In my case the backups simply aren’t uploading the .dom and .info files, but they are still uploading the actual important compressed tar.gz backup files.

It appears that the file names may get uploaded strangely, according to this post, where the person was using Linode object storage and hitting the same issue: How to debug failed backup - #11 by tomcameron

I also set the chunk size to 100. I set up the aws command, but had to do a few other things: use an alias in ~/.bash_alias, set the region to EU in ~/.aws/config, and make sure the right credentials are in ~/.aws/credentials.


In ~/.aws/config:

region = EU

In ~/.bash_alias:

alias aws='aws --endpoint-url'

And in Virtualmin, under Edit Cloud Provider:

S3-compatible server hostname:
Upload chunk size in MB: 100
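For anyone replicating this, the pieces fit together roughly like this. The endpoint URL and keys below are placeholders, not real values; use whatever Contabo assigns you in their panel:

```
# ~/.aws/config
[default]
region = EU

# ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key>
aws_secret_access_key = <your-secret-key>

# ~/.bash_alias -- force every aws call at the Contabo endpoint
alias aws='aws --endpoint-url <your-contabo-endpoint-url>'
```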

Contabo Support tells me that Cloudflare is blocking files larger than 200 MB, and that Virtualmin doesn’t seem to start chunking early enough.
They are in discussion with Cloudflare to improve the situation.
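If the aws CLI ends up doing the upload, its multipart part size can be pinned below that cap in ~/.aws/config. A sketch, assuming the 200 MB limit Contabo support describes; the 100MB values are just a safe choice, not anything Contabo documents:

```
# ~/.aws/config -- keep every multipart part below the proxy's cap
[default]
region = EU
s3 =
    multipart_threshold = 100MB
    multipart_chunksize = 100MB
```

With multipart_threshold set, any file larger than 100 MB is split automatically, so no single request body ever approaches 200 MB.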

Can installing the aws CLI, as I read in some posts from years ago, really help?

Ahh, so they are using Cloudflare too, then, I guess. I wasn’t having issues once I used chunks of 100 MB, which I believe is the default maximum file size Cloudflare allows for uploads. I had problems with the original size and with larger sizes, but not with 100, so maybe that’s down to their Cloudflare setup.

I couldn’t get it working at all without the aws command installed, but I don’t remember exactly what the issues were. That may be because Virtualmin has support for Amazon S3 specifically built in, whereas Contabo is only S3-compatible and required the changes I listed before it would work. The .info and .dom files still won’t upload, so perhaps more changes are needed beyond what I’ve done, or perhaps there’s a bug somewhere. We’re still working on figuring that out.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.