Clustering Virtualmin

To allow Virtualmin to be used across multiple hosts, do I need to worry about any files other than those within /etc/webmin?

Also, how would I license this?

I run a cluster with Virtualmin. The way I do it is to move all shared configuration and user files onto a separate NFS cluster, so changes made on one node automatically appear on the others. I only run Virtualmin on one of the nodes, so if that node goes down only management is impacted. To provide HA for the services themselves I use heartbeat, locating the mail-type services on one node and the web services on the other, with cross-failover.
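
As a minimal sketch, that cross-failover can be expressed in Heartbeat v1's haresources along these lines (node names, IPs, and service names here are placeholders, not the actual config):

```
# /etc/ha.d/haresources (hypothetical): mail services prefer node1,
# web services prefer node2; each group fails over to the survivor.
node1 IPaddr::192.168.1.10/24 postfix dovecot
node2 IPaddr::192.168.1.11/24 apache2
```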

I hope this helps.

Same type of configuration here.

I’ve been pondering how to cluster Virtualmin’s management interface for high availability and load balancing, but I haven’t had a chance to seriously look into it yet.

Can you elaborate on how you managed the clustering using a shared NFS mount? Which files did you put on the shared mount, and which other files do you still have to keep synchronized?

Thanks in advance!
Niclas

KrisKenyon wrote:

I run a cluster with Virtualmin. The way I do it is to move all shared configuration and user files onto a separate NFS cluster, so changes made on one node automatically appear on the others. I only run Virtualmin on one of the nodes, so if that node goes down only management is impacted. To provide HA for the services themselves I use heartbeat, locating the mail-type services on one node and the web services on the other, with cross-failover.

I hope this helps.

Hello,
If anyone can help me or point me to a resource, I would appreciate it. I want to use Virtualmin to run a small, reliable hosting platform, with clustering for Virtualmin, MySQL, and the mail servers, and a SAN or NFS storage system. Is there anyone out there who can help, perhaps someone who has a network diagram or similar material on how to set up clustering? Thank you very much.

I’m also interested in this feature for next year (I’ll be purchasing a second server around then).

But this is a topic that I think could be a good addition to the documentation wiki.

Could the people who have succeeded in this write up something on the wiki (rather than here)? At that point, we can “trick” Joe and Jamie into supporting it, since it’s on the documentation pages (little joke here).

I know there’s some sort of clustering in Webmin, but does it handle Virtualmin files, etc. as well?

I’d like to see this officially part of the package at some point.

Hey Kris,

That really depends on what you’re doing. Quite a bit is stored outside of /etc/webmin; all of the actual configuration files for all of the services are elsewhere, for example.

If you can give me a specific use case, I can probably be more specific. DNS clustering, for example, is well-documented (check the wiki) and well understood; so is mail with LDAP (maybe we can rope some of the folks here who are using LDAP for mail users into answering questions), or MySQL tables, etc. Clustering is a many-faceted thing, and everybody needs something a little, or a lot, different.

Yes, I’ve noticed the features in VM for a while, and I’ve been trying to find info on clustering in the wiki, but it doesn’t seem to be there yet.

Joe, can you add this ASAP!?

Can someone provide a list of configuration files that need to be synced from the master to the slave server?

This would be very useful.

(Bump) I’ve been looking at this for some time, really hoping that Virtualmin will put together a howto or a plugin/feature that lets users have a mirrored hot-swap server (at the least) and ideally load-balanced servers. This is something I’ll be experimenting with here…

http://www.howtoforge.com/high_availability_loadbalanced_apache_cluster

I’m hoping I can use Virtualmin servers, isolate the management interface of the Virtualmin server that is going to be the “master” of the cluster, and use rsync scripts or the cluster features of Virtualmin to handle the synchronization between the two servers.

Is this even reasonable? Has anyone tried this sort of thing? I’d like to know if this is a rabbit hole not worth the time…
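
For concreteness, the kind of rsync script I have in mind would be something like this (the standby hostname and the directory list are guesses on my part; which files actually need mirroring is the open question):

```sh
#!/bin/sh
# Hypothetical one-way mirror from the master to a standby over SSH.
# -a preserves permissions/ownership, -z compresses, --delete removes
# files on the standby that no longer exist on the master.
rsync -az --delete /home/        standby:/home/
rsync -az --delete /etc/webmin/  standby:/etc/webmin/
rsync -az --delete /etc/apache2/ standby:/etc/apache2/
```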

Unison would be a much better choice than rsync for this. :)
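
For example, a hypothetical invocation (host and path are placeholders); unlike rsync, unison reconciles changes made on either side rather than only pushing one way:

```sh
# Hypothetical: two-way sync of /home with a peer over SSH, no prompts.
unison /home ssh://peer//home -batch
```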

That said, I had started developing the Apache Clustering module and stopped, for lack of time, and apparent lack of interest. If there’s sufficient interest I’ll look into finishing it…

Load-balancing and fail-over have significant appeal to most sysadmins in the context of client service and disaster recovery. If there were a module that implemented this for the web services (80/443/etc.), I would consider investing in it before building out our new systems.

Joe mentioned the concept of a “hot swap” server being on the horizon, but naturally, the greatest appeal is high availability AND load balancing. Right now I’m just getting the topology of our network solid, but as this develops, and if I actually get it to work, I’ll be sure to fill people in.

In the meantime, unison is a good suggestion and I’ll look into it.
http://www.cis.upenn.edu/~bcpierce/unison/index.html

Thanks!

TonyShadwick wrote:

Unison would be a much better choice than rsync for this. :)

That said, I had started developing the Apache Clustering module and stopped, for lack of time, and apparent lack of interest. If there’s sufficient interest I’ll look into finishing it…


Any chance you’d be interested in finishing it or providing your code? I’m very interested in getting something like this done, but if some work has already been done, I don’t want to reinvent the wheel.

(bump)
I would love it if someone who has created a “hot-swap” setup would share a few details of how they have done it. I’ve been using rsync, but I’ve had no luck figuring out how to configure the file transfer and which config files to transfer over…

Thanks for any help you can offer!

So far, this is how I’ve done it.

Use 4 servers: 2 are NFS/file servers in a heartbeat/drbd setup. There are plenty of details on that; howtoforge has lots of good guides that I recommend (I’m not associated with them).

The 2 NFS servers are "backend"; the other 2 are "frontend".

Set up a private IP to be shared between the NFS servers with failover. The frontend servers use this IP for mounting directories; that way, if the primary NFS server dies, the secondary immediately takes over.

The frontend servers serve HTTP, HTTPS, MySQL, SMTP, POP3, IMAP, etc. These are load-balanced using heartbeat and ldirectord; again, there are plenty of articles on that. Static IPs for SSL domains need to be shared.
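
As a rough illustration only (all IPs are placeholders), the ldirectord side might look something like this:

```
# /etc/ha.d/ldirectord.cf, hypothetical sketch.
checktimeout=10
checkinterval=5

# 192.168.1.20 is the shared virtual IP; .21 and .22 are the frontends.
virtual=192.168.1.20:80
        real=192.168.1.21:80 gate
        real=192.168.1.22:80 gate
        service=http
        checktype=connect
        scheduler=rr
        protocol=tcp
```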

The following directories are mounted on the frontend servers from the NFS servers, essentially "shared" (see the fstab sketch below):
/home
/var/lib/php
/etc/postfix
/etc/apache/sites-available
/etc/apache/sites-enabled

You could probably also share the /etc/webmin and /etc/usermin folders, although I haven’t looked into that fully.
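
To make the mounts concrete, the fstab entries on each frontend might look like this, assuming 192.168.1.5 is the heartbeat-managed floating NFS IP (a placeholder):

```
# /etc/fstab on each frontend (hypothetical); 192.168.1.5 is the
# floating IP, not either NFS server's real address.
192.168.1.5:/home                        /home                        nfs rw,hard,intr 0 0
192.168.1.5:/var/lib/php                 /var/lib/php                 nfs rw,hard,intr 0 0
192.168.1.5:/etc/postfix                 /etc/postfix                 nfs rw,hard,intr 0 0
192.168.1.5:/etc/apache/sites-available  /etc/apache/sites-available  nfs rw,hard,intr 0 0
192.168.1.5:/etc/apache/sites-enabled    /etc/apache/sites-enabled    nfs rw,hard,intr 0 0
```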

Set up MySQL to run on both frontend servers in circular replication. Good notes here: http://www.onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html
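
The core of circular replication is a pair of my.cnf fragments along these lines (values are illustrative; each server is then pointed at the other with CHANGE MASTER TO and START SLAVE):

```
# my.cnf on frontend 1 (hypothetical)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2   # step by 2 so the two masters
auto_increment_offset    = 1   # never generate the same id

# my.cnf on frontend 2 (hypothetical)
[mysqld]
server-id                = 2
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2
```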

All 4 of my servers are the same in terms of horsepower: CPU, RAM, HDD, etc. But you could probably use slower/older machines for the NFS/backend servers.

I tried doing this setup on just 2 servers, but had problems with the NFS client and NFS server running on the same machines. Whenever I tried to simulate a failover, the client wouldn’t release files/folders for DRBD.

Make sure both frontend servers are set up in the Webmin Cluster modules. Then, whenever you add a domain or user, you need to refresh the users/groups and re-synchronize them on both servers, then restart Apache and restart Postfix. Also set up domain transfers in BIND on both machines.
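
The BIND piece is ordinary master/slave zone transfers; roughly (zone name and IPs are placeholders):

```
// named.conf on the primary (hypothetical)
zone "example.com" {
        type master;
        file "/var/lib/bind/example.com.hosts";
        allow-transfer { 192.168.1.22; };
};

// named.conf on the secondary (hypothetical)
zone "example.com" {
        type slave;
        file "/var/lib/bind/example.com.hosts";
        masters { 192.168.1.21; };
};
```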

I have a very similar setup, as follows:

7 virtual servers running Virtualmin GPL with some customizations, as follows:

2 - linux-ha/drbd redundant content servers making web content available via nfs
2 - front-end webservers load balanced via ldirectord, mounting above nfs content
2 - linux-ha/drbd redundant mysql DB servers
1 - developer server with all the user accounts and Virtualmin, also mounting the nfs content.

/etc/apache2, /var/lib/php4, and /var/lib/php5 are also shared mountpoints on the apache servers (i.e. no syncing necessary).

It works well, but I am facing memory shortages on the webservers at 150 domains right now, which I need to resolve.
Also, because of the customizations I have made to Virtualmin, I am always nervous about upgrading when new releases come out, or even about purchasing the commercial version.

I would be interested in comments regarding performance / limitations from anyone else with a similar setup.

What if I only have two machines to use? I don’t have the budget right now for four. I really appreciate your detailed response; this is exactly what I’ve been trying to implement for backup and disaster recovery. I was hoping to see a “hot swap” feature, where ServerA.com is replicated to ServerB.com and ServerB.com is an up-to-date standby. Is DRBD my best recourse?

DRBD is great for keeping partitions on 2 separate machines in sync. I’ve heard it described as a kind of “distributed RAID array”.

I guess this would work well if you used DRBD on your web-content partition. The only drawback is that you can only access one of the DRBD drives at any given time (this might have changed with more recent DRBD releases), which might be just what you want. Heartbeat can take care of starting Apache on the backup server when the primary one goes down.

If it were me, though, I would want to set it up somehow so that you could use both webservers to load balance all the time, and just point them at the "live" content on one server or the other.

The way I would do this is to set up an NFS server on the DRBD content, and mount it as a separate NFS mount point on both servers. (i.e. one server would be mounting its own content via NFS, and the other would be mounting its peer’s content. The source of this mount would automatically change whenever the other DRBD host became primary, but the content would always be the same.)
Then I would point my Apache instances at this mount point, and load-balance them using ldirectord.

In this way you could take advantage of both Apache servers whenever both machines were up, and if one ever went down, traffic would automatically fail over to the remaining server without interruption.
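
Sketching that out, and assuming the replicated content lives at /data/www with 192.168.1.5 as a floating IP that follows the DRBD primary (both placeholders):

```
# /etc/exports on whichever node is the DRBD primary (hypothetical)
/data/www 192.168.1.0/24(rw,sync,no_root_squash)

# /etc/fstab on BOTH web servers; because 192.168.1.5 moves with the
# DRBD primary, the mount source changes transparently on failover.
192.168.1.5:/data/www /var/www nfs rw,hard,intr 0 0
```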

This is kind of what I figured…

In the Virtualmin paradigm, though, everything is per-user, and there are many Apache content directories rather than one. Basically, to accurately provide failover AND load balancing, all the /home/[user_xyz]/public_html directories have to be in sync (as opposed to a single /var/www/htdocs/ or something like that). On top of that, in the virtual hosting model there are databases and per-user variations in the php.ini configuration, and user passwords/IDs/groups and such are stored elsewhere, meaning that even if /home/[user_xyz] is in sync, it does not follow that the users, data, and Virtualmin working parts and pointers are all in sync.

I don’t have much experience with NFS, but I think I understand what you are suggesting. I am using older gear, so my intranet is 100 Mbit, and I assume that would affect read/write performance on the NFS-mounted volumes? Basically, everything but /boot would need to be replicated… I’ve never used the load-balancing module you mentioned.

Right now I am just striving to get the hot swap to work. DRBD is the soundest means I can think of to replicate all the databases, users, files, and such: basically, the "/" partition on serverA stays identical to the "/" partition on serverB.
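
For reference, a single-resource drbd.conf sketch of that mirroring (device, backing partition, and addresses are placeholders; note that internal metadata needs spare space at the end of the partition, which is where the shrinking discussed below comes in):

```
# /etc/drbd.conf (hypothetical sketch)
resource r0 {
    protocol C;                  # synchronous replication
    on serverA {
        device    /dev/drbd0;
        disk      /dev/sda3;     # the partition being mirrored
        address   192.168.1.21:7788;
        meta-disk internal;      # stored at the end of the partition
    }
    on serverB {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.1.22:7788;
        meta-disk internal;
    }
}
```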

This is a simple guide…
http://wiki.centos.org/HowTos/Ha-Drbd

But of course, if you did not have the foresight to keep some of your drive unallocated, you’ll have to either use external metadata for DRBD or “shrink” that partition’s file system.

http://www.drbd.org/users-guide/ch-internals.html
(The external metadata option is, um, not so good for recovery.)

…and of course, shrinking a mounted "/" is not exactly fun…

http://www.howtoforge.com/linux_resizing_ext3_partitions
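
The rough sequence, run from a rescue/live environment since "/" cannot be shrunk while mounted (device name and target size are placeholders):

```sh
# Hypothetical outline; /dev/sda1 and 18G are placeholders.
e2fsck -f /dev/sda1       # the filesystem must be checked first
resize2fs /dev/sda1 18G   # shrink the filesystem,
# then shrink the partition itself with fdisk/parted, making sure the
# new partition is never smaller than the resized filesystem.
```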

I’ll let you know how this adventure plays out. Many Virtualmin users have a configuration similar to the one you have mentioned. I’m a bit leery about using NFS, but I am willing to sacrifice a little performance if it means solid failover…

Thanks for your suggestions!