Testing OVH S3 object storage

OS type and version Debian 11
Webmin version 2.101
Virtualmin version 7.7
awscli 1.19.1-1

I’m testing OVH’s S3 Object Storage. I was able to replace the current provider (Scaleway) on one of my Virtualmin VPSes in a few clicks, and I started testing some virtual host backups.
It seems the backup file is sent without problems (I haven’t tried a restore yet), but I get an error each time:

Uploading archive to Amazon's S3 service ..
    .. upload failed! Failed to upload information file : The request signature we calculated does not match the signature you provided. Check your key and signing method.
.. completed in 2 minutes, 33 seconds

Before contacting OVH, I want to make sure I understand the message and the process properly. I guess this means that some kind of signature calculated by Virtualmin does not match the corresponding signature calculated by OVH? Virtualmin does return an error; does this also mean the upload was not completed, or just that it was not properly signed? Any idea what info I could give OVH, for example the signature calculation method, if it is done by Virtualmin? Or can I assert that the signature calculation is done by awscli? Or am I completely wrong about this process in Virtualmin?

Thanks! Lots of questions …

I just opened a ticket with OVH, maybe they do have an idea …


The error message “The request signature we calculated does not match the signature you provided. Check your key and signing method” typically indicates that there’s a mismatch between the credentials or method you’re using to sign the S3 request and what the server expects.

Here are some common troubleshooting steps:

  1. Check Credentials: Make sure that the Access Key ID and Secret Access Key are correct.

  2. Time Sync: Ensure your system clock is synchronized. AWS S3 request signing is sensitive to time discrepancies.

  3. Region: Make sure you’re using the correct S3 endpoint for your bucket’s region.
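As a quick way to check points 1 and 3 locally, you can inspect which region awscli will actually sign requests with in its config files. A minimal sketch, demonstrated on a throwaway file (for Virtualmin backups, which run as root, the real file would be /root/.aws/config):

```shell
# Sketch: inspect an awscli config for the region the CLI will sign with.
# CONFIG is a throwaway sample here; the real file for root-run backups
# would be /root/.aws/config.
CONFIG=$(mktemp)
printf '[profile ovh]\nregion = us-east-1\n' > "$CONFIG"

# A region that the S3-compatible provider does not recognise is exactly
# what produces signature / AuthorizationHeaderMalformed errors.
grep '^region' "$CONFIG"
```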

For #3 (Region), please check this thread:

Very interesting thread on GitHub!! I have the exact same problem, and I see the exact same error on top:

Warning! The AWS command is installed, but not working : Could not connect to the endpoint URL: "https://s3.gra.amazonaws.com/"

Like hamidamadani, I upgraded my awscli to 2.13.14 and, like him, the error changed to:

An error occurred (AuthorizationHeaderMalformed) when calling the ListBuckets operation: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'gra'

In the first error, the endpoint seems to be a combination of the AWS URL “… amazonaws.com/” and the one from OVH, “s3.gra.io.cloud.ovh.net”: it takes “s3.gra” from the latter and puts it in front of the AWS one … weird.

“sbg” and “gra” are regions within OVH; they do not exist in AWS. So, like hamidamadani, I added region = gra below the proper profile ([profile xxxxx]) in the aws config file and … it works: all 3 files are uploaded properly, no error!!
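For reference, the resulting config file looks roughly like this (the profile name is redacted as in my setup; the region value is the OVH one, yours may differ):

```ini
# ~/.aws/config (or /root/.aws/config for root-run backups)
[profile xxxxx]
region = gra
```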

So it seems that Virtualmin does not pass the endpoint correctly to awscli, or something like that … My previous provider was Scaleway; the difference is that Scaleway has a “par” region (Paris), and that “par” does exist within Amazon AWS as well. Maybe that explains why it worked with Scaleway but not with OVH (“gra” stands for Gravelines, where they have a huge datacenter, and “sbg” for Strasbourg; neither exists in AWS).

Thank you very much for pointing me in the right direction. Of course, it would be nice for this to work right out of the box …

Thanks for the feedback! Great to hear that it worked for you.

I linked @Jamie to this thread too. I will cross link it to GitHub.

@Jamie, why not list all available regions using the aws command with the describe-regions sub-command, and then add them to the page as a dropdown menu (i.e. a select), so the region is always set?

Yes, we could do that. Although in this case, I think the real issue is that the aws command is still trying to use the wrong region, because it gets saved in /root/.aws/config or /root/.aws/credentials.

The work-around for now is to edit those files and remove any region lines.
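That edit can be scripted. A minimal sketch, demonstrated on a throwaway copy (point FILE at /root/.aws/config to apply it for real, after backing the file up first):

```shell
# Remove any hard-coded region lines from an awscli config file.
# FILE is a throwaway sample here; use /root/.aws/config (and
# /root/.aws/credentials) for the real work-around.
FILE=$(mktemp)
printf '[profile ovh]\nregion = us-east-1\noutput = json\n' > "$FILE"

# Delete every "region = ..." line, leaving the rest of the file intact.
sed -i '/^[[:space:]]*region[[:space:]]*=/d' "$FILE"
cat "$FILE"
```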

Total noob question: is there a way to fetch the available regions for a specific provider?

For AWS EC2 the following works for me:

aws ec2 describe-regions --query "Regions[].{Name:RegionName}" --output text

That said, fetching the available regions for a specific cloud provider generally depends on the provider’s API or SDK.

I guess you can at least assume that the provider has an S3 API … I will ask OVH what the aws query to get the available regions would be.

On a side note: I ran more tests, and upgrading awscli to v2 is not necessary; it’s really editing the config file that solves the problem.