Very soon, Chrome and Firefox will be displaying warnings on non-SSL pages that contain input fields. This seems like an ideal use case for Let's Encrypt, but isn't it true that each cert has to have its own IP address? I suppose that when you create the Let's Encrypt cert you could list every subdomain used for that particular domain (I've got one domain with 15 legitimate subdomains), but what if one of the sites is a true payment/transaction site that already has a commercial "seal"ed cert installed on it? And then, what happens when we add a 16th site that needs a cert? Get a whole new cert from Let's Encrypt that lists all 16?
I mean, short of setting up each site with its own public IP address…
And yes, I know that letsencrypt will be issuing wildcard certs starting in January of 2018. But that’s 4 months away, and the warnings start in October.
Seems like with Virtualmin being great at handling virtual hosting, and letsencrypt being integrated into it, this would have come up already. But I’ve searched the forum and not found the question even asked.
Any direction would be appreciated.
Hi. You asked, "isn't it true that each cert has to have its own IP address?" That is false, a total myth. You don't need a separate IP per domain to set up an SSL cert; hosting companies use that claim to milk money out of you. For example, I have one domain, say domain.com. I set up Let's Encrypt for it, and then set up Let's Encrypt for each subdomain, like paste.domain.com and billing.domain.com. They are all on the same IP with no issues. I did each one separately so it's easier to manage and troubleshoot if anything goes wrong during renewal, or if I want to turn SSL off for just one domain or subdomain.
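For illustration, this is roughly what two name-based SSL VirtualHosts sharing one IP look like in Apache. The domain names and certificate paths below are just examples; Apache picks the right cert from the hostname the browser sends via SNI, which all modern browsers support:

```apache
# Two SSL sites on the same IP and port; SNI lets Apache choose the cert.
<VirtualHost *:443>
    ServerName paste.domain.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/paste.domain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/paste.domain.com/privkey.pem
</VirtualHost>

<VirtualHost *:443>
    ServerName billing.domain.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/billing.domain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/billing.domain.com/privkey.pem
</VirtualHost>
```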
I would love for this to be true, but my experience suggests it’s not. But maybe I’m missing something?
I actually tried it, which is what led to my post. I have a single server with a main domain (say, society.org) and 14 subdomains under it. At this time, only one subdomain, payments.society.org, has a cert (GoDaddy). I attempted to create a Let's Encrypt cert for society.org and phpmyadmin.society.org, and while the Virtualmin function executed successfully and Virtualmin reports that those subdomains are using Let's Encrypt certs, when I browse to those sites, it is the GoDaddy cert that loads.
Researching why this is, I have found numerous references to how Apache loads SSL certs: it selects the cert based on the IP address before evaluating which VirtualHost to use, which explains why we are told you can only use one cert per IP address.
Maybe letsencrypt has a way around this IF the only certs used are for letsencrypt?
OK, this is very interesting. I have a completely different server (both are on Amazon and based on the same image: Ubuntu 16.04) that had NO certs on it until now. It has a similar main-domain-and-subdomains layout, just not as many. I added SSL and installed a Let's Encrypt cert on two of the subdomains, and it works! I've verified that each subdomain is indeed using its own designated cert.
Can anyone explain why this works but my original attempt didn’t? Is it indeed something letsencrypt does internally?
You've likely got one or more VirtualHost sections with *:443, while others have 192.168.1.1:443 (where 192.168.1.1 is your IP). That can cause weird behavior. Apache understands how it decides what to serve, but nobody else can.
If it's not that, I'm not sure. Do you have a "default" site set up, where that original SSL cert is configured outside a VirtualHost section? If so, that would explain it. You really have to stop thinking of a "default" site as having any meaning in Apache when using name-based virtual hosting. It'll just confuse you.
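To make the symptom concrete, here is a sketch of the kind of mix that causes this; 192.168.1.1 is a placeholder IP, and the vhost bodies are elided:

```apache
# One vhost bound to an explicit IP, another to *. Apache treats these as
# different address sets, so name-based matching between them is unreliable.
<VirtualHost 192.168.1.1:443>
    ServerName payments.society.org
    # ... GoDaddy cert directives ...
</VirtualHost>

<VirtualHost *:443>
    ServerName phpmyadmin.society.org
    # ... Let's Encrypt cert directives ...
</VirtualHost>
```

The fix is to make every SSL VirtualHost use the same form: either all *:443 or all IP:443.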
“Maybe letsencrypt has a way around this IF the only certs used are for letsencrypt?”
Virtualmin can use either the website or the DNS record to validate a certificate. But, the problem you’re having is not with Let’s Encrypt, it is with your Apache configuration.
Thanks. My habit, possibly a bad one, is to configure my VirtualHosts as *:443 and *:80, since I've never needed more than one IP address on a server so far, and because I am on Amazon, where IP addresses can change frequently. On that last point, I don't believe this is the problem it used to be: I think "in olden days" Virtualmin used the AWS "private" IP, which changes frequently and can't be controlled, but it now uses the Elastic IP, which will only change if I change it. Do I need to standardize on a specific IP address?
Also, I don’t use internal DNS. I use an external 3rd party service.
Thanks, this resolved a problem I was having.
I’m trying to sort out what the best way to deal with that is; one of our mirrors is at Scaleway, and it has the same problem (private IP changes on reboots). It’s annoying. I have been experimenting with both solutions (using * and using the IP and updating it on every reboot). Both work. Obviously, * is less hassle. I have not, however, tried having multiple SSL certs.
But, I can’t think of why it would go wrong as long as all of your VirtualHost sections use the same combination (so, either all on * or all on an IP), and as long as you don’t have any SSL directives outside of VirtualHost sections.
Just to confirm, I ignore the default host. I’ve not added any directives there at all.
I noticed that on CentOS, there is a default SSL host configured in the stock Apache SSL config file. I've added a fix for that in the next version of the virtualmin-config package rolling out later today, but you can fix it manually by editing that file and commenting out the entirety of the <VirtualHost _default_:443> section: put a # in front of every line that doesn't already have one, all the way down to the closing </VirtualHost>. Then restart httpd.
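For reference, the commented-out section would end up looking roughly like this. This is abbreviated; the stock CentOS file has more directives in that block, and every one of them gets a leading #:

```apache
# Default SSL vhost disabled so it no longer answers on port 443:
#<VirtualHost _default_:443>
#    SSLEngine on
#    SSLCertificateFile /etc/pki/tls/certs/localhost.crt
#    SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
#</VirtualHost>
```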
I use Let’s Encrypt, and have SSL on all my domains (and only 1 ip address).
“Just to confirm, I ignore the default host. I’ve not added any directives there at all.”
While this is definitely true, I decided a couple of hours ago that I should grep for SSL stuff, and I found something similar in Ubuntu's default-ssl.conf file. So you and I were looking for the same thing at the same time. I haven't tested disabling it yet.
OK, I started from scratch and added SSL to one of the sites on this server and set up a letsencrypt cert on it. It works. So to summarize:
- I commented out everything in default-ssl.conf
- the transaction site has a GoDaddy cert and is configured as *:443
- the content management site has a Let's Encrypt cert and is configured as *:443
Both sites load the appropriate cert.
(Apparently the CMS site is including some resources over http://, which shows it as not secure, but that is as it should be until it's fixed. The cert itself loads correctly.)
Before I apply SSL and a Let's Encrypt cert to the other sites, I will try the next most challenging thing: applying a GoDaddy wildcard cert to one of the domains that needs it, an SSO server where each of our members has their own subdomain. Wishing Let's Encrypt were ready to do wildcards NOW…
That one should have been disabled by default; it only applies to your Apache configuration if it is in /etc/apache2/sites-enabled. On my test systems, I don’t see it enabled. I’ll go ahead and disable it during our configuration step, but it shouldn’t be necessary.
You can disable it by running:
# a2dissite default-ssl.conf
But, if you don’t have that enabled, your problem is something else.
This was the solution to my problem!
Just remove the … in /etc/httpd/conf.d/ssl.conf,
then restart Apache!
Thank you, all is right now…