Clustering Virtualmin

You may be interested in this thread as well…in particular, Jamie’s response to this feature request…

http://www.virtualmin.com/forums/virtualmin/clustering-virtualmin.html

I am not sure what you mean by "/etc/apache/sites-available" and "sites-enabled". What is your Linux distro? I am using CentOS; there is an /etc/httpd/ folder, and the config files for both the global settings and the clients live in /etc/httpd/conf/.

Also - are you saying that when you run the synchronization on your front-end webservers, the /home/[user] directories get synced as well? There must be something wrong with my install; I’ve tried this and found a few bugs.

http://www.virtualmin.com/index.php?option=com_flyspray&Itemid=82&do=details&task_id=4097&Itemid=82&string=&project=1&search_name=&type[0]=&sev[0]=&pri[0]=&due[0]=&reported[0]=&cat[0]=&status[0]=&percent[0]=&opened=merlynx&dev=&closed=&duedatefrom=&duedateto=&changedfrom=&changedto=&openedfrom=&openedto=&closedfrom=&closedto=

Anywho - thanks a ton for posting this. It would be awesome to have a section in the documentation or on the wiki that covers different scenarios of this sort of implementation.

Oh yes, Debian does this with its apache2 package. They’re basically just directories of symlinks to the config files you want to “activate”; sites-enabled is then read in by apache2.conf (Debian’s equivalent of httpd.conf).

/etc/apache2/conf.d
/etc/apache2/sites-available
/etc/apache2/sites-enabled   (symlinks to configs in sites-available)
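
In practice you rarely manage those symlinks by hand; the apache2 package ships the a2ensite/a2dissite helpers that do it for you. A quick sketch (the vhost file name example.com is just a placeholder):

    # create the symlink sites-enabled/example.com -> sites-available/example.com
    a2ensite example.com

    # remove the symlink again
    a2dissite example.com

    # tell Apache to pick up the change
    /etc/init.d/apache2 reload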

The way I have things set up, there is no synchronization. Both webservers mount the same web content, stored on a DRBD-based nfs cluster. I guess I forgot to mention that the Developer’s server mounts this content under /home, and the webservers mount the same content under /var/www. (this makes absolute paths a bit confusing though)

Content Server A \_ shared IP --> Mounted on Webserver A \_ load balanced
Content Server B /            --> Also on Webserver B    /
                              --> Also on developer server <-- access via VPN
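
To make the mount side concrete, the client fstab entries would look roughly like this; the hostname nfs-cluster and the export path /export/web are placeholders, and the exact NFS options will vary with your setup:

    # on Webserver A and B: the shared content, served from /var/www
    nfs-cluster:/export/web   /var/www   nfs   rw,hard,intr,proto=tcp   0 0

    # on the developer server: the same export, mounted under /home
    nfs-cluster:/export/web   /home      nfs   rw,hard,intr,proto=tcp   0 0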

Brent,

Are you using virtualmin for your hosting environment?

I am kinda confused. In the context of Virtualmin on CentOS there is no “sites-enabled” like you mention in Debian, and I am not sure I understand how that paradigm maps onto CentOS’s Apache configuration, where Virtualmin manages the /etc/httpd/conf/httpd.conf file and the VirtualHost entries in it.

Which machine is virtualmin installed on?

You have two front-end webservers mounting the same content directories from one development server (which is clustered - for failover). That much I think I get.

#1 Development server --mounted to -->Server A
|| --mounted to -->Server B
||
#2 [drbd clustered Dev server]

So if Server A goes down - Server B will "fail over."
If #1 goes down, #2 will fail over.
In both instances, vice versa.

Do you run virtualmin on the Development Server then?

Where do you run bind? What does your bind config look like here? Do you route all FTP requests to the Dev server as well? I notice you put postfix config on the webserver(s) A & B - does this mean that mail is routed through them but ultimately lands in the "Development" server box?

What about usermin and email sending/receiving?

I am just trying to understand how you got all the pieces to fit together…

Thanks for the dialogue - it is very helpful!

In the context of Virtualmin on CentOS there is no "sites-enabled" like you mention in Debian, and I am not sure I understand how that paradigm maps onto CentOS's Apache configuration, where Virtualmin manages the /etc/httpd/conf/httpd.conf file and the VirtualHost entries in it.

Virtualmin works with the OS, not separately from it. If the OS keeps things in separate VirtualHost files (as is the case on Debian and Ubuntu), Virtualmin does, too. If the OS keeps everything in one httpd.conf (as is the case on CentOS), Virtualmin does that.

Don’t let a minor semantic difference between two systems throw you–they’re doing the same thing in slightly different ways.

As for some of your questions, I don’t know how Brent does it (and there are many ways, with their own positives and negatives). But, here’s how I’d probably do it:

I’m assuming either server can take over the IP(s) from the other, in the event of failure. Everything gets a lot harder if you can’t take over the IP.

Where do you run bind?

On both systems. You need two anyway. So, have one slave to the other, and vice versa. This is no different than what the documentation for cluster slave DNS autoconfiguration tells you to do…you just do it twice, once on each machine. DNS is, by far the easiest service to provide redundancy for. Don’t let it intimidate you–it’s designed from the ground up to be highly redundant. And Webmin takes that one step farther by being able to easily promote a slave zone into a master zone, if you lose your server A and need for server B to become the master for a week while you get server A back into production, or whatever.
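
If you're curious what that cross-slaving boils down to, it's just a pair of zone blocks in named.conf; the zone name, the 192.0.2.x addresses, and the file paths below are placeholders (paths differ by distro), and in practice the cluster slave autoconfiguration writes this for you:

    # on server A (192.0.2.10), master for the zone
    zone "example.com" {
            type master;
            file "/var/named/example.com.hosts";
            allow-transfer { 192.0.2.20; };
            also-notify { 192.0.2.20; };
    };

    # on server B (192.0.2.20), slaving the same zone
    zone "example.com" {
            type slave;
            masters { 192.0.2.10; };
            file "/var/named/slaves/example.com.hosts";
    };

The zones mastered on server B just get the mirror-image configuration.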

What does your bind config look like here?

Follow the autoconfiguration guide for both server A and server B, and you won’t need to care. Really don’t fret over BIND. It should just work.

I don’t know about the other stuff, as I’m not remembering what all this thread was about. But I just wanted to point you in the right direction on BIND, because I could tell you wanted to make it way more complicated than it needs to be. :wink:

I have VirtualMin running on my “Developer’s” server. This is the only part that I do not have redundant servers for (because everything on this server can afford to be down for a reboot here and there).

So I have the Development server set up pretty much as a standalone VirtualMin server, with Virtualmin, Bind, FTP, popa3d, and Postfix running there. Then I export the apache configs from this server to my webservers, and have created a custom /etc/init.d/remote-apache script to start and stop apache on those servers from the Virtualmin server. (All my servers are running Debian Etch)
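
The gist of that script is just running the remote init scripts over ssh. A stripped-down sketch of the idea (web-a and web-b are placeholder hostnames, and it assumes key-based ssh access as root):

    #!/bin/sh
    # /etc/init.d/remote-apache (sketch)
    # Pass start/stop/restart/reload through to apache2 on the
    # frontend webservers over ssh.

    SERVERS="web-a web-b"

    case "$1" in
      start|stop|restart|reload)
        for host in $SERVERS; do
            ssh root@$host /etc/init.d/apache2 "$1"
        done
        ;;
      *)
        echo "Usage: $0 {start|stop|restart|reload}"
        exit 1
        ;;
    esac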

I think you understand the web content part, so I won’t go over that again here.

@Nick White,

Thanks for this post; I made a diagram of the set-up and nearly have a plan of execution. I have four servers like you (A, B, C, D).

I have a few questions. Is DRBD necessary for the NFS back-end servers (C & D)? Could I use an rsync script running from the "master" NFS server (C) to the "slave" NFS server (D) every minute or so?

Currently, the “master” NFS is the production machine running 60+ websites (server “C”). It’s a bit overworked, and lags - especially when there is high demand for media (video/audio) content.

I am thinking of setting it up so that this master NFS (C) is the only place users can add/edit their sites, and the rsync script runs to push from the "master" NFS (which acts as the webmin/virtualmin box as well) to the "slave" NFS. Does this sound feasible? Can you see any caveats with C->rsync cron->D?

C --exports nfs--> A and B (front-end servers)
D --mirror of-- C
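
Just to make the C -> D part concrete, what I have in mind is a cron job on C along these lines (the path and the slave's hostname are placeholders):

    # crontab entry on C: push the web content to the slave every few minutes
    */5 * * * * rsync -a --delete /home/ nfs-slave:/home/ >/dev/null 2>&1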

The front-end servers - is one of them your "primary.nameserver.com" (A) and the other your "secondary.nameserver.com" (B)? Jamie suggested I set (A) to have (B) as a slave DNS in Virtualmin, and (B) to have (A) as a slave in Virtualmin. Is this what you mean by "Domain Transfers"? Would it be OK to have the main NFS/Webmin server (C) configured as "primary.nameserver.com", or would all DNS need to go to the front-end servers?

Because of the nature of this configuration, I also wanted to ask if there are any “gotchas” I should be on the lookout for in this context. You mentioned the Webmin cluster and the need to refresh all users when a new one is added to the main NFS server (C). I wonder if there is a way to automate this as part of the process Virtualmin runs when a new user account is created, along with the Apache and Postfix (and BIND, I would assume?) restarts that are necessary…

Thanks for your excellent example and any advice given will be greatly appreciated!!!

I guess there are two reasons for the backup content server. One would be for high-availability, and the other would be for data redundancy.

Our setup is designed for high availability, and hence I can fail the content server over and reboot it for updates at any time, without concern that the websites will experience any downtime, or that the data might be out of sync. My understanding is that even if a client is in the middle of deploying a site during my failover, it will continue on the second server with only a bit of a hiccup. I think you would lose this with the rsync model you mentioned.

However, if your goal is data redundancy, perhaps rsync would be better. In this model you would only sync the servers often enough to keep the data relatively current (i.e. daily/weekly), but infrequently enough that if your server were hacked, your data were deleted, or you had a hardware failure, those undesirable changes would NOT be instantly replicated to the other server. In our case, we depend on daily tape backups for this.

As for DNS, we just use a single VirtualMin server which also acts as our DNS server, so replication is not a factor. This server, however, is last in a chain of Active Directory DNS servers, and not queried directly by clients, so everything is theoretically cached on the AD servers which reduces the load and increases the redundancy of the DNS system.

Currently I’ve got the four servers in the initial stages of configuration.

serverA and serverB are the web content servers, sharing 1 public IP that is the fulcrum of load balancing.

serverC and serverD are the nfs file servers for serverA and serverB.

serverC is the primary file server; it is the Virtualmin-enabled machine on which all user changes are made, and with which all the other machines are clustered (users synchronized). serverC exports the shares for content and Postfix to serverA & serverB. serverC’s content is rsynced to serverD, and serverD takes over serverC’s IP when serverC fails.

Right now, with serverC’s shares exported over NFSv4, Apache is failing to start: in the CentOS layout of httpd there are symbolic links to /var/log/httpd, /usr/lib64/httpd/modules, and /var/run, and these are not “seen” through an NFSv4-mounted export with these settings in the client’s fstab:
auto,rw,async,_netdev,proto=tcp,retry=10,wsize=65536,rsize=65536,hard,intr 0 0

So I’m researching other ways to make sure that when Apache starts, it sees the right directories: either hardcode the references in the httpd.conf file, or find a setting/method for NFSv4 to resolve the links. Thoughts?
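
For the hardcoding option, what I have in mind is replacing the ServerRoot-relative references in httpd.conf with absolute paths, roughly like this (the module line is just one example; the default CentOS httpd.conf has many of them):

    # avoid relying on the logs/, run/ and modules/ symlinks under /etc/httpd
    PidFile /var/run/httpd.pid
    ErrorLog /var/log/httpd/error_log
    CustomLog /var/log/httpd/access_log combined
    LoadModule rewrite_module /usr/lib64/httpd/modules/mod_rewrite.so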

Hmmm.

I don’t know how you’re doing your nfs, but for me:

  • My frontend servers are configured as apache servers, and they will run standalone, without nfs, with all log/module/etc directories intact. (of course there may be errors about missing content)
  • I mount a single "content" share via nfs on each of my webservers, which contains only the various DocumentRoots for the various sites. This is the redundant part that fails over between servers.
  • I also mount the /etc/apache2 directory from the NFS server so that both webservers use the same configs - but I think this is optional.
  • I am also currently logging to mysql instead of to disk. The module for doing this is a bit flaky, so I wouldn’t recommend it - but we’re transitioning between webfarms, so this is an easy way to consolidate all our logs. I have done this in the past with apache logging to the local machine, and it works fine like that.

Note: in order to make virtualmin play nice with multiple apache servers I had to write a script: /etc/init.d/remote-apache, which restarts apache on both frontend servers in turn.

If you mount the /etc/apache2 directory from the NFS server, won’t you have to restart all the webservers that share the config when the config file changes? The config file will change every time you add a new virtual host via VirtualMin, won’t it?
If you have to restart, then how do you do it?

Also, each VirtualHost has an IP, e.g.

<VirtualHost 10.20.144.133:80>


How do both servers have the same IP?

That’s what I am looking for too; that’s why it needs to be name-based virtual hosting. Can anybody help with how to customise Virtualmin so that it will create name-based virtual hosts?

Go to Webmin > Virtualmin > Module Config and replace the shared IP with *.
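
With the shared address set to *, newly created virtual servers come out as name-based VirtualHosts, roughly like this (example.com and the DocumentRoot path are placeholders):

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /home/example/public_html
    </VirtualHost>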

Hi, I realise this thread is pretty old, but I’m also looking for a two-server solution for failover, mirroring and HA…

I found this guide which looks like a pretty awesome solution to give 2 servers failover AND load balancing!

http://gcharriere.com/blog/?p=339

My problem is I already have my server running and I’m now wanting to add a second server (which is exactly the same hardware etc) to provide the failover, load balancing and HA/mirroring…

So I don’t really know how to follow any of these guides without mashing up my server that’s already in production use! Or how to mirror/sync everything properly, including Virtualmin/Webmin/Usermin…

Did anybody get anywhere over the years of this discussion with a 2 server setup? And if so I’d be happy to pay for some help to get this to work.

Thanks for any help at all.