New Webserver

Howdy all,

I finally made the switch to our new webserver (one I’d threatened to make a couple of times over the past couple of weeks, but I ran into a variety of hurdles related to the age and complexity of the old deployment, and the fact that we’re moving to all-new infrastructure based on Cloudmin and Cloudmin Services, in addition to Virtualmin).

The only difference you should notice is that everything is just a whole lot faster. The virtual machine it runs on has significantly more resources with which to work (twice the RAM, four times the disks on a much better dedicated controller, twice the number of CPU cores, etc.), and some of the work has been offloaded to other systems via Cloudmin Services.

But there may be problems that my testing didn’t shake out. I obviously want to know about them, so if you see any new problems with the website, let me know in the ticket tracker. I hope a couple of known issues will be resolved by the new server as well, such as the occasional “Your message has been queued for moderation” problem, but I’m not certain of that…it might just be a bug rather than a timeout issue.

Anyway, sorry for the sluggishness of the site these past few months. It won’t happen again. Now that we’re running on all-Cloudmin infrastructure, we’re very well-equipped for expansion and for migrations to newer/bigger hardware as needed.


Great news, Joe. Congrats!

Such migrations always involve lots of preparation work for a very short migration window.

  • It’s always excellent practice to use the tools we develop ourselves! :wink:
    It lets us find and fix lots of little bugs that nobody bothers reporting :wink:

Wondering if you are doing fault tolerance, and what infrastructure you are using for that?

Definitely a whole lot faster. Great!

Wondering if you are doing fault tolerance, and what infrastructure you are using for that?

Not much, currently, beyond keeping regular backups, though I plan to put the rest in place over the next week or so. I’ll tell you the rough idea of that, but the actual implementation may look different.

We’re using Cloudmin Services for DNS, of course…but the actual fault tolerance is provided by Webmin’s Slave DNS (documented here: ), which I would assume most Virtualmin users are already doing.
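Under the hood, Webmin’s Slave DNS feature manages standard BIND zone transfers between servers. As a rough illustration only (the zone name and IP addresses below are placeholders, and Webmin generates this for you), the secondary-side configuration it produces looks something like:

```
// named.conf fragment on the secondary nameserver (hypothetical values)
zone "example.com" {
        type slave;
        masters { 192.0.2.10; };                    // the primary nameserver
        file "/var/named/slaves/example.com.hosts"; // local copy of the zone
};
```

The primary just needs to allow transfers to the secondary (e.g. an `allow-transfer` entry), which Webmin also handles when you register the slave.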

We’re not currently using Services for MySQL, but that’s the next step on my scaling agenda. What we’ll do there is build a master/slave replicated MySQL database (using the standard MySQL replication tools; nothing weird or proprietary to Virtualmin), and then have Cloudmin Services manage the databases on behalf of Virtualmin. You could do that today even without Cloudmin Services, actually, since Virtualmin fully supports remote MySQL databases. I don’t think Services adds any functionality related to replication yet, but it’s on the agenda. So, for now, I’ll be setting replication up manually, just like I’d do in a plain ol’ Virtualmin deployment.
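For anyone curious what “standard MySQL replication tools” means in practice, here’s a minimal sketch of classic master/slave setup. All credentials, hosts, and log coordinates below are placeholders; the real coordinates come from `SHOW MASTER STATUS`, and this assumes `log-bin` and distinct `server-id` values are already set in each server’s my.cnf:

```
-- On the master: create a replication account (placeholder credentials)
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'replpass';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.0.2.%';

-- On the master: note the current binary log file and position
SHOW MASTER STATUS;

-- On the slave: point it at the master using those coordinates
CHANGE MASTER TO
  MASTER_HOST='192.0.2.10',
  MASTER_USER='repl',
  MASTER_PASSWORD='replpass',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;

-- Verify: Slave_IO_Running and Slave_SQL_Running should both say Yes
SHOW SLAVE STATUS\G
```

Nothing Virtualmin-specific here, which is the point: Virtualmin talks to the database like any other client, so the replication layer underneath is plain MySQL.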

Anyway, once the database is on a replicated MySQL server, we no longer have to worry about our application data, and we could, at pretty much any time, switch to another Cloudmin clone of the virtual machine that is hosting our site, and that backing database will always be up to date.

The mail data is trickier, and I plan to just use the forward and hold backup mail server features for that, rather than doing full takeover. If ever a replicated backend storage that is compatible with standard filesystem semantics and not disastrous for performance (or from a cost perspective) comes along, we’ll probably move onto that…Maybe GlusterFS? I’m not sure. I’ve done a lot of research in the area, but still have very little confidence that there is a good scalable, distributed, reliable, filesystem solution.
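For context, “forward and hold” is essentially a standard backup MX arrangement: the secondary accepts mail for the domain and queues it until the primary is reachable again. A rough Postfix sketch for the backup server (the domain and queue lifetime are illustrative, and accepting all recipients without a recipient list is the simple-but-backscatter-prone variant):

```
# main.cf fragment on the backup MX (hypothetical domain)
relay_domains = example.com
# empty = accept any recipient in the relayed domain
relay_recipient_maps =
# keep queued mail longer than the default while the primary is down
maximal_queue_lifetime = 10d
```

DNS then advertises the backup with a lower-priority MX record, e.g. `example.com. IN MX 20 backup.example.com.`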

As an aside, now that we have a production Cloudmin Services setup, we’ll be rolling it into the shop. So, that’ll be available soon.

But, everything we’re doing related to fault tolerance in the near future could also be done with Virtualmin and Webmin, today. There’s very little magic related to fault tolerance in Cloudmin Services, but I mention it because that’s how we happen to be managing some of it in this new deployment.