Failed to connect to s3.amazonaws.com

Greetings,

I have been using S3 to back up 50+ virtual servers daily, and I now have an inconsistent issue. I perform a weekly full backup followed by six incremental backups through the rest of the week. Last week, a couple of days after the full backup, the incremental backups succeeded for about 40 of the servers, but the remaining servers returned this message:

… upload failed! HTTP connection to s3.amazonaws.com:443 for /my_bucket/2012-05-10/my_server.tar.gz failed : Failed to connect to s3.amazonaws.com:443 : Connection refused

Since this started, there has not been a complete full or incremental backup without failures. The number of failing uploads has ranged from 3 to 30 of the virtual servers, and with only a few exceptions it is never the same servers two days in a row.

This topic, https://www.virtualmin.com/node/25170, indicates that the backup will fail when the region is not US Standard. I've double-checked and verified that the region is in fact set to US Standard.

This topic, https://www.virtualmin.com/node/23441, indicates that DNS issues may be causing a problem. Output from nslookup and dig follows below and looks OK to my inexperienced eyes, but I'm confused about how a DNS issue would affect some but not all of the files backed up.

This has been happening for about a week now, and I have replicated it outside of the cron job that had been running the backups successfully for many months. I have also tried a manual backup to S3, as well as a new backup schedule with a new key pair, with the same results.
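For anyone wanting to reproduce the check outside the backup tool, something like this rough sketch can test the raw TCP connection (this is not part of Virtualmin; the host and port are simply taken from the error message above):

```python
# Quick connectivity probe for the S3 endpoint from the error message.
# Rough sketch only -- host/port are assumptions taken from the failure above.
import socket

def probe(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        # Covers DNS failures, refused connections, and timeouts.
        print(f"{host}:{port} failed: {exc}")
        return False

if __name__ == "__main__":
    # Repeat a few times, since the refusals are intermittent.
    for _ in range(3):
        print(probe("s3.amazonaws.com", 443))
```

Running it in a loop during a failing backup window should show whether the "Connection refused" comes from the network path itself rather than from the backup tool.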

I have also asked this question on the Amazon S3 forums but have not received any response there yet.

Any helpful guidance or suggestions would be much appreciated.
Thanks,
Hugh

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.4 <<>> s3.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42593
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 10, ADDITIONAL: 7

;; QUESTION SECTION:
;s3.amazonaws.com. IN A

;; ANSWER SECTION:
s3.amazonaws.com. 56 IN CNAME s3.a-geo.amazonaws.com.
s3.a-geo.amazonaws.com. 274 IN CNAME s3-2.amazonaws.com.
s3-2.amazonaws.com. 34 IN A 207.171.187.117

;; AUTHORITY SECTION:
amazonaws.com. 880 IN NS r4.amazonaws.com.
amazonaws.com. 880 IN NS u1.amazonaws.com.
amazonaws.com. 880 IN NS r3.amazonaws.com.
amazonaws.com. 880 IN NS r2.amazonaws.com.
amazonaws.com. 880 IN NS r1.amazonaws.com.
amazonaws.com. 880 IN NS u5.amazonaws.com.
amazonaws.com. 880 IN NS u4.amazonaws.com.
amazonaws.com. 880 IN NS u2.amazonaws.com.
amazonaws.com. 880 IN NS u6.amazonaws.com.
amazonaws.com. 880 IN NS u3.amazonaws.com.

;; ADDITIONAL SECTION:
u2.amazonaws.com. 446 IN A 156.154.65.10
u4.amazonaws.com. 915 IN A 156.154.67.10
u6.amazonaws.com. 1017 IN A 156.154.69.10
r1.amazonaws.com. 1235 IN A 205.251.192.27
r2.amazonaws.com. 6016 IN A 205.251.195.199
r3.amazonaws.com. 6156 IN A 205.251.197.41
r4.amazonaws.com. 1975 IN A 205.251.198.134

;; Query time: 16 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri May 10 16:55:31 2013
;; MSG SIZE rcvd: 374

Server: 127.0.0.1
Address: 127.0.0.1#53

Non-authoritative answer:
s3.amazonaws.com canonical name = s3.a-geo.amazonaws.com.
s3.a-geo.amazonaws.com canonical name = s3-2.amazonaws.com.
Name: s3-2.amazonaws.com
Address: 207.171.189.80

Make sure your bucket name is unique across the entire S3 region. Bucket names are not case sensitive.

Thanks for the clue eddieb. My bucket name was unique, but it did send me digging into the S3 documentation for bucket naming rules.

http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html

While it says that bucket names in the US Standard region can be up to 255 characters long and contain any combination of uppercase letters, lowercase letters, numbers, periods, dashes, and underscores, the naming rules for all other regions are more stringent.
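As a rough sketch of those stricter rules as I read that page (the helper name and regex are my own, not anything from the AWS docs), a name can be checked like this:

```python
import re

# A rough sketch of the stricter (non-US-Standard) bucket-naming rules as
# described in the linked documentation; the helper name is my own.
def is_strict_bucket_name(name):
    """Check a bucket name against the DNS-compliant naming rules."""
    if not 3 <= len(name) <= 63:
        return False
    # Each dot-separated label: lowercase letters, digits, hyphens;
    # must start and end with a letter or digit.
    label = r"[a-z0-9]([a-z0-9-]*[a-z0-9])?"
    if not re.fullmatch(rf"{label}(\.{label})*", name):
        return False
    # Names formatted like IP addresses are not allowed.
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    return True

# Underscores -- like the ones in my original bucket names -- fail:
print(is_strict_bucket_name("my_bucket"))  # False
print(is_strict_bucket_name("my.bucket"))  # True
```

Every one of my original bucket names contained an underscore, so all of them would fail this stricter check.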

I renamed my buckets using periods instead of underscores, adhering to the more stringent rules, and ran the full backup manually without any errors. So even though I was using the US Standard region, the underscores in my bucket names may have been what was causing some file uploads to fail. I will monitor for a while and report back.

Hugh

For anyone who is having the same problem, the above fix is currently working.

A full week of automated incremental backups and the weekly full backup completed without errors, so I would suggest adhering to the more stringent bucket-naming rules regardless of the region selected.

Hugh