MySQL Shutting Down > Virtualmin Backup > Restore

SYSTEM INFORMATION
OS type and version: Debian Linux 12
Webmin version: 2.105
Usermin version: 2.005
Virtualmin version: 7.10.0
Theme version: 21.09.5
Package updates: All installed packages are up to date

Hi,

I recently set up a new system with Debian 12 server and transferred all the websites using Virtualmin Backup > Restore from the previous Ubuntu 20.04 server. Everything works great, except that the MySQL service shuts itself down every couple of days.

I am wondering, because the MySQL version (10.11.6) is quite different from the old server (probably 8), could it be because of the database differences? The question is, when a virtual host is transferred through Virtualmin, does it export and import the database, or does it simply copy the files, in which case it might need to go through an upgrade to fit the new version?

Any help would be appreciated.

Check for OOM (out of memory) errors in your logs. This seems to be a bit of a problem for MySQL/MariaDB. It can be a shortage of physical memory, or other processes that run away.
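For example, on a systemd box like Debian 12 you can grep the kernel log for OOM kills (a quick sketch; adjust the time window to taste):

```
journalctl -k --since "7 days ago" | grep -iE "out of memory|oom-killer"
# or, if the journal is not persistent, check the current boot's kernel ring buffer
dmesg -T | grep -iE "out of memory|oom"
```

If MariaDB shows up there, the restarts are the kernel killing it, not the database crashing on its own.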


Note that while you’re right that this symptom is almost always the OOM killer, it’s not usually the “fault” of MySQL/MariaDB. The OOM killer uses heuristics to determine what process will be least dangerous to destroy while also freeing up enough memory to allow new allocations to proceed.

When your system runs out of memory it is already a catastrophe. There is no safe way forward, something bad is about to happen. The OOM killer tries to pick something that isn’t very active but also something large enough to free up enough memory to allow allocations for new processes.
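If you're curious which processes the kernel would weigh most heavily, a rough sketch like this lists the biggest residents and their current oom_score (the `[ -r ... ]` guard just skips the ps header line):

```
# List the ten largest resident processes with their current OOM scores
ps -eo pid,rss,comm --sort=-rss | head -n 11 | while read pid rss comm; do
  [ -r "/proc/$pid/oom_score" ] && echo "$comm: rss=${rss} kB oom_score=$(cat /proc/$pid/oom_score)"
done
```

Some people also lower the database's score with a systemd OOMScoreAdjust= override, but that only pushes the kill onto something else; it doesn't fix the underlying shortage.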

Adding swap is a reasonable stopgap measure. Swap is extremely slow, though, so if you care about performance, you should have enough real memory to perform all the usual operations of the system without swapping. But, slow is better than catastrophe.
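If you do go the swap route as a stopgap, the usual Debian steps look roughly like this (assuming no swap partition already exists; 2 GB is only an example size):

```
# Create and enable a 2 GB swap file (on some filesystems fallocate
# won't produce a usable swap file and dd must be used instead)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```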

Reducing memory usage is a potential solution. Adding memory is another.

MySQL/MariaDB may not be the biggest user of memory, even if it’s the process most often killed by the OOM killer (but, also you may not notice other processes being killed…some things are less important than the database). So, you need to figure out where usage is happening, and figure out whether you can reduce it without impacting performance.
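A quick way to see where memory is actually going (a sketch; RSS double-counts shared pages a bit, but it's good enough to spot the big consumers):

```
# Overall memory and swap usage
free -h
# Total resident memory grouped by command name (worker pools add up fast)
ps -eo rss,comm --no-headers | awk '{sum[$2]+=$1} END {for (c in sum) printf "%8d kB  %s\n", sum[c], c}' | sort -rn
```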

But, the short answer is that if the OOM killer is kicking in, you must solve memory problems urgently. Randomly killing processes guarantees loss of service whenever it happens, and likely data loss/corruption eventually.

That is the big red flag for me!

You are making a really major change to your system - a change in OS.

Such changes must not be done lightly. How well do you understand the code running on each of the websites, and how that code interacts with a database? Are you just assuming code that was written in the language of the website (PHP/MySQL for example) will work perfectly with the current versions loaded by the new OS?

Each of the steps should be taken one at a time - with testing. It is amazing what can go wrong in this process! You can't throw everything into the mixer and expect something other than soup to come out.

This pointed me in the right direction, thank you.

Thanks Joe. A lot of memory is consumed by php-fpm processes, and I can reduce that significantly by adding some limits and monitoring. And a larger swap on a system with NVMe storage might not be too slow, just to keep a buffer. Thank you again.
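For reference, this is roughly how I'm sizing the limits: average worker memory times the pool's pm.max_children should fit inside RAM with headroom left for MariaDB (the php-fpm8.2 process name is just the Debian 12 default; adjust for the installed PHP version):

```
# Count PHP-FPM workers and report their average and total resident memory
ps --no-headers -o rss -C php-fpm8.2 | \
  awk '{n++; sum+=$1} END { if (n) printf "workers=%d  avg=%.0f MB  total=%.0f MB\n", n, sum/n/1024, sum/1024 }'
```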

Thanks for the comment Stegan, this is actually how I have done it, time and time again. First one website, running tests, checking logs, then another, etc. All the websites run on almost the exact same code base, same WordPress and set of plugins, so it is not too complicated.