Distribute domain services over multiple servers

After a kernel upgrade that went wrong, I’m currently waiting for the emergency tech support to manually reset my server. Too bad.

Even worse, this affects all of my domains on that machine. OK, I’ve got a secondary MX in place so no mail is lost during the outage, and DNS is spread over multiple machines and networks, too.

But user services like IMAP and SMTP are gone. Most of them require authentication of some kind, which leads me to this:

I’d like to spread services over multiple machines: one for mail, one for web… you get the idea. However, it’s not feasible to maintain separate stores for user credentials. In my opinion, one system should be the “master”, replicating its user/group/domain information to the slaves. So, even if the web server dies, users can continue to use the mail server just like before.

Any chance I could do that with Virtualmin? Regards,

Christian

Christian,

I am working on something similar to what you describe, but coming at it from a different direction.

The goal is still to provide seamless services to clients, but rather than distribute services across multiple servers (with multiple points of failure), I am working on a “hot backup” system to provide near-seamless access.

I have DNS for all of my servers running outside of Virtualmin, on a separate DNS hosting site (sitelutions.com - I HIGHLY recommend them); none of my servers are running BIND. I have my main server in a DC, a backup VPS in a different DC, and a “last ditch” server running in my office on a dynamic IP.

All servers are set up from a CentOS 5.4 minimal install followed by Virtualmin’s install.sh. In the normal state, my “main” server runs Webmin/Virtualmin and provides all services. The server is monitored, and I am alerted on my cell phone of any problems. If it is something “major”, I just re-point my DNS at Sitelutions to the backup VPS, restore my Virtual Server backups on that VPS (see step 3 below - usually not necessary unless a recent change has been made), restore my WordPress databases there (see step 2 below), and everything is pretty close to seamless (still working out a few bugs - see caveats).
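Roughly, the failover on the backup VPS comes down to something like the sketch below. The paths are illustrative, and you should double-check the flags against `virtualmin restore-domain --help` on your own install - this is the shape of it, not my exact commands:

```shell
#!/bin/sh
# Failover sketch for the backup VPS. Paths are placeholders; the
# virtualmin CLI flags should be verified against your install.

restore_all() {
    # Restore every Virtual Server from the rsync'ed daily backups
    # (step 3 below writes them under /home/backups/virtualmin).
    virtualmin restore-domain --source /home/backups/virtualmin/ \
        --all-domains --all-features

    # Reload the hourly MySQL/WordPress dumps (step 2 below);
    # the dump directory here is a hypothetical location.
    for dump in /home/backups/mysql/*.sql; do
        mysql "$(basename "$dump" .sql)" < "$dump"
    done
}

# Guarded so reading or sourcing this file never kicks off a restore.
if [ "${RUN_RESTORE:-0}" = "1" ]; then
    restore_all
fi
```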

To make this happen I:

  1. Run a self-written rsync cron job every hour to back up /home to my backup servers (this covers mail, static web sites, etc.)
  2. Run a self-written cron job every hour to back up my MySQL and WordPress databases, since most of my hosted sites run WordPress and I don’t know when a client might update their site
  3. Run a daily “old style” backup of my Virtual Servers into /home/backups/virtualmin (which gets rsync’ed in step 1)
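For what it’s worth, the hourly jobs in steps 1 and 2 boil down to something like this. The hostnames and dump directory are placeholders, not my real setup, and I assume MySQL credentials live in root’s ~/.my.cnf so no password appears in the script:

```shell
#!/bin/sh
# Sketch of steps 1 and 2 above, run hourly from cron.
# Hostnames and paths are placeholders, not the real setup.

BACKUP_HOSTS="backup-vps office-backup"   # hypothetical backup servers
DUMP_DIR=/home/backups/mysql              # hypothetical dump location

# Step 2: dump each MySQL database (credentials assumed in ~/.my.cnf).
dump_databases() {
    mkdir -p "$DUMP_DIR"
    mysql -N -e 'SHOW DATABASES' |
    grep -Ev '^(information_schema|performance_schema|mysql|sys)$' |
    while read -r db; do
        mysqldump --single-transaction "$db" > "$DUMP_DIR/$db.sql"
    done
}

# Step 1: push /home (mail, static sites, and the dumps above) to
# every backup server over SSH.
sync_home() {
    for host in $BACKUP_HOSTS; do
        rsync -az --delete /home/ "root@$host:/home/"
    done
}

# Guarded so the sketch can be read or sourced without touching servers.
if [ "${RUN_BACKUP:-0}" = "1" ]; then
    dump_databases
    sync_home
fi
```

Because the dumps land under /home, step 1’s rsync carries them off-site for free, same as the daily Virtualmin backups in step 3.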

The caveats are:

  1. The VPS and backup server are “dumb” - I don’t EVER install software directly on them or do any work on them. I also synchronize /etc/passwd and /etc/group from the main server to the backups
  2. I am fighting a problem where the Virtualmin Virtual Server backup doesn’t restore my FTP users
  3. I have to have control of all domains I host so I can update the DNS at sitelutions.com
  4. Outlook and Thunderbird are pretty slow to refresh their DNS caches, even though I use a 30-second TTL
  5. I’m still working on getting the process down so I can make a “virgin” server a hot backup - right now it requires substantial manipulation of /etc files, mostly because I see fail2ban as a requirement and want to maintain consistent whitelists, Postfix setups, etc.
  6. I have given each server a different hostname… I wonder if using a single hostname would make things easier with the configuration files?
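The account sync in caveat 1 is just an rsync along these lines (hostnames are placeholders again). A wholesale copy like this is only safe because every server starts from the identical CentOS minimal install, so the system UIDs line up:

```shell
#!/bin/sh
# Sketch of the account sync from caveat 1. Hostnames are placeholders.
# Copying these files wholesale only works because all servers are built
# from the same CentOS minimal install, so system UIDs match.

SYNC_HOSTS="backup-vps office-backup"

sync_accounts() {
    for host in $SYNC_HOSTS; do
        # You would most likely also need /etc/shadow, since it holds
        # the password hashes IMAP/SMTP auth relies on.
        rsync -a /etc/passwd /etc/group "root@$host:/etc/"
    done
}

# Guarded so the sketch can be read without touching any server.
if [ "${RUN_SYNC:-0}" = "1" ]; then
    sync_accounts
fi
```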

I’m sure the pundits will come out and tell me there are better ways to do this, and I’m sure there are, and I’m also open to suggestions… but I’m relatively new to Linux, and in lieu of someone pointing me in a better direction, this has worked remarkably well for me.

Anyway, hope this helps you get an idea of how I have solved the problem you’re having.

  • Acorp

www.acorp.net