I’m just trying to save you from a bad experience: wasted money, but more importantly, wasted time. If you plan to do this for your own site, it may not be that much work. If you plan to host other people’s sites, do you expect to modify the code of every single PHP or web app they upload to make it work with your clustering platform?
There are better solutions today, like virtualization with failover. That would be much easier and simpler than trying to scale individual services.
The problem with asking this on a forum is that it would require a full forum of its own. It would take pages and pages to explain how to do this properly, and that’s assuming we know all your hardware and network specs and configurations. Like I said before, you actually need to work together with the hardware here. This may sound surprising, but it’s really true. How are you provisioning the IPs in the network and then the servers/sites? Do you plan to switch IPs on a site that is failing over? Do you run BGP sessions? If not, how do you plan to fail over the HAProxy itself? If you don’t, then you have a huge bottleneck in your setup: the load balancer and the storage. You need to fail over and scale both as well, and for that you need to be able to change IPs dynamically. On top of that, HAProxy alone, assuming you are using simple heartbeats, is too simple. You probably need an external monitoring system to detect when to fail over properly and when to revert back.
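To give you an idea of what “external monitoring” means in practice, here is a minimal sketch in Python. It polls a health endpoint on the active load balancer and triggers a failover routine after repeated failures. The URL and the promote_standby() function are placeholders for whatever your network actually uses (a floating-IP API call, a BGP announcement from the standby, etc.); this is an illustration of the concept, not a drop-in solution.

```python
import time
import urllib.request

# Placeholder endpoint -- substitute your real HAProxy health URL.
ACTIVE_HEALTH_URL = "http://10.0.0.10:8080/health"
CHECK_INTERVAL = 5        # seconds between checks
FAILURE_THRESHOLD = 3     # consecutive failures before we act

def is_healthy(url, timeout=2):
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def promote_standby():
    """Hypothetical failover action: move the floating IP, or announce
    the prefix from the standby via BGP. The implementation depends
    entirely on your provider and network, which is exactly my point."""
    print("FAILOVER: promoting standby load balancer")

def monitor():
    failures = 0
    while True:
        if is_healthy(ACTIVE_HEALTH_URL):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                promote_standby()
                break  # a real monitor also has to handle reverting back
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()
```

And notice what the sketch does not handle: split-brain, flapping, and reverting once the primary is healthy again. Those are the parts that take pages to explain.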
It’s OK to ask, sure. But it’s not really a question for Virtualmin. Virtualmin does not actually manage any services; it’s just a GUI, Perl software with a web interface that lets you point-and-click your way through managing everything. You can do absolutely everything Virtualmin does by running command-line commands or editing files manually. My point is that Virtualmin can’t scale this for you, and there is no easy point-and-click solution. Virtualmin was never designed for that, and while it does have some features that can help you here, you will need to get your hands dirty writing your own scripts if you want this to work. And once you do all that, Virtualmin can’t really manage anything anymore, so it’s pretty much useless in that type of environment, unless you want it messing up your config files.
Consultants are paid thousands of dollars for such solutions, and companies may spend even more on hardware alone. If you want to do this right, you probably need a $200K SAN in the first place. If you want to do it even better, you need two datacenters or two installation sites. Using NFS is a home solution. Will it work? Yes. Will it be fast? Nope. Will it have problems with file locks and writes? Sure, once you send it anything with a bit of traffic. Can it scale? Nope, unless you plan to assign different NFS storage to different accounts.
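To make the file-locking point concrete, here is a small Python sketch of the kind of advisory locking (fcntl.flock) that web apps rely on for shared state like sessions and caches. On local disk this is cheap; on an NFS mount, every lock has to round-trip through the NFS lock manager, and under real write traffic that becomes exactly the bottleneck and stale-lock problem I’m describing. The file path here is local just so the sketch runs; imagine it on the shared mount.

```python
import fcntl
import time

# In reality this file would live on the NFS mount shared by all web nodes,
# e.g. /mnt/nfs/sessions/counter.txt -- using a local path so the sketch runs.
SHARED_FILE = "counter.txt"

def increment_counter():
    """Read-modify-write on a shared file, guarded by an advisory lock.
    Over NFS, every flock() involves the lock manager, and a crashed
    client can leave locks dangling for everyone else."""
    with open(SHARED_FILE, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is granted
        try:
            f.seek(0)
            data = f.read().strip()
            value = int(data) if data else 0
            f.seek(0)
            f.truncate()
            f.write(str(value + 1))
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

if __name__ == "__main__":
    start = time.time()
    for _ in range(1000):
        increment_counter()
    print(f"1000 locked writes took {time.time() - start:.2f}s")
```

Run that against local disk, then against an NFS mount with several nodes doing it at once, and you will see the difference for yourself.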
If you plan to offer a simple HA service that can scale as a hosting service, you are looking at a $1 million or more investment. I’m not kidding. That is more or less what you will need in terms of time (people do have to get paid) and hardware in two separate facilities. Assuming you plan to do everything yourself and use all open source, you are still looking at $100K in hardware alone for anything you could consider a start-up, and months of work. Weeks, if you already know how to build everything, but you still need a lot of testing; you don’t want to send traffic without load testing everything. In your case, if you plan to do this alone, you actually need to master how Apache, DNS, MySQL, Postfix and the other software work first. By mastering I mean you need to completely understand how those programs work and process data. If you don’t, you will never be able to fix issues in your platform, and not even the Virtualmin developers have such extensive knowledge of everything. And that’s assuming you already know everything about the networking layers, because you need those as well.
You would need to read an entire book just for one single service. For example, reading a full book on Apache is still not enough, because you need Apache to talk to PHP and MySQL, which means three books. And you need DNS sync as well, which means BIND now, and then you need this and that… And even assuming you read and know everything, all of that is still not real-world experience. People who set these things up have years of experience, not months.
What you are basically trying to achieve is not simple. It’s very, very complicated. It’s the holy grail of services, and I know million-dollar companies that did it wrong (some closed down because of it, while others that did simple things succeeded). Let me make it clear: there is a reason why Google, Amazon and Microsoft are not offering this. There is no point today, not anymore. You should look into failing over complete servers instead of individual services. That is my point. I’m your friend here, and you will save time, money and frustration. What you are trying to achieve is old school, and while I know how to do it, I would probably not go that route today.
If you really want a point-and-click solution that can scale basically without limit and fail over web applications (without any code changes), look at Rancher (free and open source) or Docker Enterprise Edition (commercial, with support).
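As a taste of why the container route is simpler, here is a sketch using the Docker SDK for Python (the docker package, assuming Docker itself is installed and running): you hand the whole application to the engine with a restart policy, and the platform, not your custom scripts, brings it back when it dies. Orchestrators like Rancher extend the same idea across many hosts. The image and port are just examples.

```python
import docker  # pip install docker

# Connect to the local Docker engine via its default socket.
client = docker.from_env()

# Run a whole application as one container. The restart policy tells
# the engine itself to bring it back if it crashes -- no per-service
# heartbeat scripts needed.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    ports={"80/tcp": 8080},
    restart_policy={"Name": "always"},
    name="demo-web",
)
print(f"started {container.name} ({container.short_id})")
```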
You will be surprised how easy it is, because running full containers, where the application is not aware of the underlying services, is much easier. Same with servers. (Separate the application from the services and the services from the hardware, then separate the data itself.) Doing this with full servers, physical or virtual, is much easier than doing it with individual services. Trying to scale every single service is useless in most scenarios, and most services have limits of their own anyway. For example, Apache may have a limit on the number of vhosts it can boot up: it may work with 1,000 and fail with 5,000. The same is true for MySQL and other software; you are going to hit a vertical scaling limit (CPU or RAM, eventually) and then you can’t scale anymore. You see the problem here? Even if you manage to create such a complex solution, MySQL may be very unstable with, let’s say, 512 GB RAM and 1,000 databases running. All software has memory leaks and bugs, and at scale you are going to hit those issues constantly. So it’s much easier to manage 100 MySQL servers than one that does everything.
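A minimal sketch of what “manage 100 MySQL servers” looks like in practice: instead of one giant instance, each hosting account is deterministically mapped to one server out of a pool. The hostnames here are made up; the point is the routing function, which keeps every individual MySQL instance well below its vertical limit.

```python
import hashlib

# Hypothetical pool of MySQL servers; grow the list as you add hardware.
DB_SERVERS = [f"mysql{n:02d}.internal" for n in range(1, 11)]

def db_server_for(account: str) -> str:
    """Deterministically map a hosting account to one DB server.
    The same account always lands on the same server, so the router
    needs no lookup table (a real setup would keep one anyway, so
    accounts can be migrated without rehashing everything)."""
    digest = hashlib.sha256(account.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(DB_SERVERS)
    return DB_SERVERS[index]

if __name__ == "__main__":
    for account in ("alice.example", "bob.example", "shop42.example"):
        print(f"{account} -> {db_server_for(account)}")
```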
This also avoids putting all your eggs in the same basket. Sure, failing over individual services is still great, and it’s not that hard for your own site or a handful of sites, assuming they are not extremely high traffic. But scaling a single site or web application is actually an art. Facebook had to develop its own systems because PHP didn’t scale properly and MySQL had problems, and so did every single major high-traffic website today. Once you hit a certain point, no software will help you without heavy modifications. Now, if you want to do this for a lot of sites (as a service), that is a nightmare scenario. Let your users do that. And secondly, do you even need it? Servers today are such horribly powerful monsters that any one of them can handle millions of unique visitors with a properly optimized site (Mr. Cache is your friend: turn dynamic pages into static ones).
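Since I mentioned Mr. Cache: here is the simplest possible version of “turn dynamic pages into static ones” as a Python sketch. Render the page once, write it to disk, and reuse the file until it expires; the web server (or a CDN) can then serve the static copy without touching PHP or MySQL at all. The render function and paths are purely illustrative.

```python
import os
import time

CACHE_DIR = "page_cache"  # in reality, a directory the web server serves directly
TTL = 300                 # regenerate each page at most every 5 minutes

def render_page(slug: str) -> str:
    """Stand-in for the expensive dynamic render (PHP + MySQL in reality)."""
    return f"<html><body><h1>{slug}</h1><p>generated {time.ctime()}</p></body></html>"

def get_page(slug: str) -> str:
    """Serve a cached static copy if it is fresh, else regenerate it."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, f"{slug}.html")
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < TTL:
        with open(path) as f:
            return f.read()          # cache hit: no database work at all
    html = render_page(slug)         # cache miss: do the expensive work once
    with open(path, "w") as f:
        f.write(html)
    return html

if __name__ == "__main__":
    print(get_page("homepage")[:60])
```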
Even a single monster machine is enough today to handle the traffic of most sites, so scaling horizontally (adding more and more servers) is not the requirement it was in past years. Hardly any website, unless you are Amazon or Google, needs it. Scaling vertically (adding bigger CPUs and more RAM) can really serve even huge sites, huge meaning millions of visitors a month, particularly when you can easily buy servers with over 512 GB RAM today. All of my server motherboards support between 256 GB and 512 GB, and they are not even new ones.
I’m your friend here, and for your own sanity, forget what you are trying to do. Just run individual Virtualmin servers. Master that, and if for some reason you still want this, look into virtualization or containers like I told you. If your concern is uptime, then I can tell you that if you are using a proper datacenter and running quality hardware (server hardware, not desktop hardware), that is not a problem either. It’s not worth doing all that for just a few minutes of downtime a year, or five minutes a month to run some updates. If you are having outages frequently, it means something is not right with your network, servers or Internet providers.
For example, I’ve seen people trying to unravel the mystery of why their services crash every two weeks because they were so cheap that they used standard desktop systems with standard RAM sticks, not realizing that servers use ECC registered RAM for a reason (error correction). Or having high load because they threw everything onto a single drive instead of using a proper hardware RAID card with multiple disks (I/O has a limit). Or people buying budget servers from vendors that sell desktop machines as servers and then getting all sorts of horrible performance problems and service crashes; that hardware was never designed to run 24/7… Sure, you can do that if you are Google, because they treat machines as cattle: it doesn’t matter if you burn ten today, because the data is resilient and lives in multiple systems. If you want that, you may look into CoreOS and KuberDock, and more importantly Mesos, but that is another story.
I could go on here for pages and pages, but then again, I have to put food on my table.
Trust my advice. If you want to play around with this to learn, you are welcome. If you want to offer a commercial service, the fact that you asked this here means you are not ready, and you would just provide a horrible service to your users, or let’s say a disservice.
You can trust my word and save time, money and a lot of frustration, or just experience the pain on your own. Some people have time or money to burn, but never both. People who don’t have time usually have money; people who don’t have money tend to have a lot of free time. If you are in the second group, you can learn everything yourself; if you are in the first, just pay someone to do it for you. In the end, time is money, and that is exactly how you should treat it, regardless of how long something takes to implement.
Either way, good luck with your project.