MariaDB stops every few hours

I am having problems with the MariaDB database server stopping every few hours. I have to go into Virtualmin and hit “Start” for it to work again, and it has already happened twice today.

My VPS has 16 GB of RAM but around 91% of it is in use. Could it be a RAM problem?

– LOG mariadb.log.rpmsave –
230922 13:17:21 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
230922 13:17:21 [Note] /usr/libexec/mysqld (mysqld 5.5.68-MariaDB) starting as process 27952 …
230922 13:17:21 InnoDB: The InnoDB memory heap is disabled
230922 13:17:21 InnoDB: Mutexes and rw_locks use GCC atomic builtins
230922 13:17:21 InnoDB: Compressed tables use zlib 1.2.7
230922 13:17:21 InnoDB: Using Linux native AIO
230922 13:17:21 InnoDB: Initializing buffer pool, size = 128.0M
230922 13:17:21 InnoDB: Completed initialization of buffer pool
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
230922 13:17:21 InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait…
230922 13:17:21 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 5 MB
InnoDB: Database physically writes the file full: wait…
230922 13:17:21 InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 5 MB
InnoDB: Database physically writes the file full: wait…
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: 127 rollback segment(s) active.
InnoDB: Creating foreign key constraint system tables
InnoDB: Foreign key constraint system tables created
230922 13:17:21 InnoDB: Waiting for the background threads to start
230922 13:17:22 Percona XtraDB (http://www.percona.com) 5.5.61-MariaDB-38.13 started; log sequence number 0
230922 13:17:22 [Note] Plugin 'FEEDBACK' is disabled.
230922 13:17:22 [Note] Server socket created on IP: '0.0.0.0'.
230922 13:17:22 [Note] Event Scheduler: Loaded 0 events
230922 13:17:22 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.68-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
230922 13:19:12 [Note] /usr/libexec/mysqld: Normal shutdown
230922 13:19:12 [Note] Event Scheduler: Purging the queue. 0 events
230922 13:19:12 InnoDB: Starting shutdown…
230922 13:19:17 InnoDB: Shutdown completed; log sequence number 1597945
230922 13:19:17 [Note] /usr/libexec/mysqld: Shutdown complete

230922 13:19:17 mysqld_safe mysqld from pid file /var/run/mariadb/mariadb.pid ended

RAM usage always grows to almost full. That’s how caching works. You should consider “buffer” and “cache” usage as “free”.
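
For example (a minimal sketch; the column names are from CentOS 7's procps output and may differ on other systems):

free -h
# read the "available" column rather than "free"; memory counted under
# "buff/cache" is page cache that the kernel hands back automatically
# whenever applications actually need it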

Could be. Even with a lot of RAM, a process that randomly dies is quite often because the OOM killer killed it.

This looks like a normal shutdown. There was a long thread about a problem that looked exactly like this one, which never came to a satisfactory conclusion about the cause (but memory is my best guess): MariaDB error with mysql is not running on system

I probably wouldn’t try to get anything useful out of that conversation, though, as it was very chaotic and has a lot of confusing guesses, theories, and troubleshooting attempts. But, in the end the OP disabled some problematic WordPress plugins and the problem was resolved (which points to memory, indirectly).

Something I didn’t mention is that I have 21 domains hosted on this VPS, and therefore 21 databases, one for each online store. This is the Virtualmin resource usage report:

Running processes → 344
CPU load averages → 0.39 (1 min) 0.32 (5 mins) 0.36 (15 mins)
Real memory → 14.12 GiB used / 1.12 GiB cached / 15.51 GiB total

So should I try increasing the RAM?

Check for OOM killer messages in the kernel log.

How can I locate the kernel log? (CentOS 7)

Note: the log above is mariadb.log.rpmsave. Where do I locate the error log?

dmesg
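
For example, a minimal sketch for CentOS 7 (the same messages also end up in /var/log/messages and the systemd journal):

# kernel ring buffer, with human-readable timestamps
dmesg -T | grep -i -E 'out of memory|killed process'
# persistent copies of the kernel log
grep -i 'out of memory' /var/log/messages
journalctl -k | grep -i 'killed process'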

rpmsave is an old log (from before an upgrade of the mariadb package). That’s not useful information.

You didn’t say whether they are indeed WordPress sites. That was eventually identified as the cause of the problem in that topic (a plugin, as usual, not core). InnoDB tables have long been a problem with MySQL and require very careful handling, especially concerning foreign key indexing. The problem with plugins is that they are usually designed and tested in isolation; the next one has no knowledge of the others running and often plays with the database accordingly.

I disagree with Joe. I don’t think it is a memory problem as such; that is simply the way it is being revealed. Throwing more memory at it will just put off the inevitable. You have to identify the process that is shutting down MariaDB. Again, if these are WP sites, be sure they are all up to date. That applies especially to your OS and MariaDB.
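
One way to see how the shutdowns are being triggered, as a sketch (assuming the systemd unit is named mariadb, as on a stock CentOS 7 install):

# service start/stop history as recorded by systemd
journalctl -u mariadb --since "2 days ago"
# and the classic syslog around the crash time
grep -i -E 'mariadb|mysqld|oom' /var/log/messages | tail -50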

BTW, 21 domains in 16G is not that much, and those CPU averages are just fine.

An app leaking gigabytes of memory is still a “memory problem”. :wink:

For sure. But just giving it more memory is not a fix. Give it all the memory possible and it is still going to fail eventually.

We need to identify the leak, isolate it, and fix it.

Those CPU averages would be a lot higher if it were a systemic leak.
MariaDB is not failing; it is being deliberately shut down.

That other topic was resolved by disabling some problematic WordPress plugins.

I asked before: are any of these virtual servers running WordPress?

This is the output of the dmesg command:

The full output is in this link → WeTransfer - Send Large Files & Share Photos Online - Up to 2GB Free

…
[1434572.899637] [19346] 48 19346 64259 1535 126 0 0 httpd
[1434572.899641] [19364] 48 19364 64247 1498 126 0 0 httpd
[1434572.899645] [19367] 48 19367 64258 1499 126 0 0 httpd
[1434572.899649] [19368] 48 19368 64256 1504 126 0 0 httpd
[1434572.899652] [19383] 0 19383 28852 50 13 0 0 sh
[1434572.899656] [19390] 0 19390 68908 3771 88 0 0 firewall-cmd
[1434572.899661] [19392] 0 19392 50362 469 52 0 0 proftpd
[1434572.899664] Out of memory: Kill process 18630 (mariadbd) score 14 or sacrifice child
[1434572.902346] Killed process 18630 (mariadbd), UID 27, total-vm:2324624kB, anon-rss:233480kB, file-rss:0kB, shmem-rss:0kB
[1436814.496727] sh (22935): drop_caches: 3

OOM killer. So, yes, memory problem.

You’re running out of memory. That indicates something pathological, probably…16GB is a lot of RAM for most small-scale hosting servers. So, you either have an app/plugin that’s doing a lot more than its fair share of memory use, or you’ve configured MariaDB or some other app in an inappropriate way (cranking buffers/caches too high, for instance).
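
If you want to rule out an oversized MariaDB configuration, a quick sketch (these are standard server variables; run it as the database root user):

mysql -e "SHOW VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size','key_buffer_size','max_connections',
   'tmp_table_size','max_heap_table_size');"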

Look at top and sort by memory (hit M, that’s <shift>-<m>). See who’s big. Then figure out why it’s big.
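
A couple of command-line equivalents, if that is easier to capture for a forum post (a sketch; both use standard procps tools):

# biggest processes by resident memory
ps aux --sort=-rss | head -25
# rough per-user total of resident memory in MiB, to spot the heaviest virtual server
ps -eo user:20,rss --no-headers | awk '{a[$1]+=$2} END {for (u in a) printf "%-20s %8.1f\n", u, a[u]/1024}' | sort -k2 -nr | head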


Yes, all 21 sites run WordPress. While there is free RAM the pages work, but once the processes grow to more than 300 the problems begin, because they use all the RAM. So I want to know whether it is just a RAM problem or whether the processes are increasing uncontrollably.

Running processes → 344

Complete dmesg command output → WeTransfer - Send Large Files & Share Photos Online - Up to 2GB Free

[1434572.899637] [19346] 48 19346 64259 1535 126 0 0 httpd
[1434572.899641] [19364] 48 19364 64247 1498 126 0 0 httpd
[1434572.899645] [19367] 48 19367 64258 1499 126 0 0 httpd
[1434572.899649] [19368] 48 19368 64256 1504 126 0 0 httpd
[1434572.899652] [19383] 0 19383 28852 50 13 0 0 sh
[1434572.899656] [19390] 0 19390 68908 3771 88 0 0 firewall-cmd
[1434572.899661] [19392] 0 19392 50362 469 52 0 0 proftpd
[1434572.899664] Out of memory: Kill process 18630 (mariadbd) score 14 or sacrifice child
[1434572.902346] Killed process 18630 (mariadbd), UID 27, total-vm:2324624kB, anon-rss:233480kB, file-rss:0kB, shmem-rss:0kB
[1436814.496727] sh (22935): drop_caches: 3

Don’t repost the same log entries again. We saw it, and got what we needed out of it. You need to proceed from there to narrow down the problem.

What processes? How many domains are you hosting on this server?

What execution mode are you using? And, have you installed mod_php (mod_php makes Apache huge, and a very busy server would consume a ton of extra memory with mod_php installed, even if you aren’t using it).
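
A quick way to check (a sketch; the exact module name varies with the PHP version, so just look for anything PHP-related):

# list the modules Apache has actually loaded
httpd -M 2>/dev/null | grep -i php
# no output means mod_php is not loaded and PHP only runs through FPM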

Just 21 web pages, or do you mean 21 domains each running WordPress? → either way, that is not very big.
If it is 21 domains, is a specific domain using up the memory, or all of them because of a specific operation or process? That matters especially if a WP plugin is badly written (frequently the problem) and a process is fired off without being isolated (not allowed to complete or time out) before it reruns, causing the number of processes to escalate, each taking another chunk of memory until you run out.

Joe’s question about mod_php is important and needs answering; we have seen this before and it really is a killer.

I have 21 hosted domains; each one runs WordPress with WooCommerce and other open-source plugins, because they are online stores. I am using PHP-FPM 7.4.33.

I have restarted the server and now there are only 202 processes and RAM usage is 33%, but as the processes increase, RAM usage grows and the problem returns.

These are the 340 processes that were active when the database was stopped.

ID Owner Size Command
23712 mysql 201.83 MiB /usr/sbin/mariadbd
979 named 155.5 MiB /usr/sbin/named -u named -c /etc/named.conf
24607 bluefly 127.5 MiB php-fpm: pool 169592578914685
24609 bluefly 124.29 MiB php-fpm: pool 169592578914685
22704 bluefly 115.14 MiB php-fpm: pool 169592578914685
323 cleanpro 111.85 MiB php-fpm: pool 169602882724328
22702 ironcat 107.82 MiB php-fpm: pool 16959253019240
22706 cleanpro 104.73 MiB php-fpm: pool 169602882724328
9176 ironcat 93.33 MiB php-fpm: pool 16959253019240
20964 sanitek 91.19 MiB php-fpm: pool 16975191513915
28894 ironcat 90.66 MiB php-fpm: pool 16959253019240
24200 root 87.95 MiB /usr/libexec/webmin/authentic-theme/stats.cgi
19335 misho 85.75 MiB php-fpm: pool 16973225382174
23972 misho 83.46 MiB php-fpm: pool 16973225382174
22715 fumicorp 82.27 MiB php-fpm: pool 169716889125940
8297 fumitienda 81.5 MiB php-fpm: pool 16973227605842
14098 sda 78.8 MiB php-fpm: pool 169611119315996
19356 misho 77.77 MiB php-fpm: pool 16973225382174
22703 bluefly 77.56 MiB php-fpm: pool 169592578914685
21111 dacorp 76.77 MiB php-fpm: pool 16954321631497
26344 desinfecciones 76.69 MiB php-fpm: pool 16960295211544
9004 ironcat 75.08 MiB php-fpm: pool 16959253019240
9619 fumistore 74.14 MiB php-fpm: pool 16973229509413
21112 dacorp 74 MiB php-fpm: pool 16954321631497
20963 sanitek 73.87 MiB php-fpm: pool 16975191513915
9625 desinfecciones 73.24 MiB php-fpm: pool 16960295211544
11096 fumistore 72.25 MiB php-fpm: pool 16973229509413
25846 bluefly 71.22 MiB php-fpm: pool 169592578914685
32096 fumitienda 68.78 MiB php-fpm: pool 16973227605842
564 ironcat 68.18 MiB php-fpm: pool 16959253019240
22711 fumix 67.98 MiB php-fpm: pool 169611287626022
22712 fumix 67.84 MiB php-fpm: pool 169611287626022
21113 dacorp 67.59 MiB php-fpm: pool 16954321631497
22724 fumitienda 67 MiB php-fpm: pool 16973227605842
21085 dacorp 66.5 MiB php-fpm: pool 16954321631497
5584 fumix 65.81 MiB php-fpm: pool 169611287626022
22858 fumitienda 64.98 MiB php-fpm: pool 16973227605842
6604 fumicorp 64.95 MiB php-fpm: pool 169716889125940
23718 fumicorp 64.6 MiB php-fpm: pool 169716889125940
23142 fumix 64.56 MiB php-fpm: pool 169611287626022
22723 fumitienda 64.54 MiB php-fpm: pool 16973227605842
8519 fumitienda 64.39 MiB php-fpm: pool 16973227605842
22716 fumicorp 63.74 MiB php-fpm: pool 169716889125940
1311 fumicorp 63.43 MiB php-fpm: pool 169716889125940
1829 fumicorp 62.79 MiB php-fpm: pool 169716889125940
27990 fumicorp 61.83 MiB php-fpm: pool 169716889125940
23084 cleanpro 61.74 MiB php-fpm: pool 169602882724328
16016 fumicorp 58.14 MiB php-fpm: pool 169716889125940
20570 ironcat 57.42 MiB php-fpm: pool 16959253019240
22698 elgaserito 55.42 MiB php-fpm: pool 16957576059961
22701 ironcat 52.9 MiB php-fpm: pool 16959253019240
22727 dralizbeth 52.43 MiB php-fpm: pool 1697514077327
4924 elgaserito 52.07 MiB php-fpm: pool 16957576059961
22697 elgaserito 50.26 MiB php-fpm: pool 16957576059961
4922 elgaserito 49.97 MiB php-fpm: pool 16957576059961
7254 elgaserito 49.88 MiB php-fpm: pool 16957576059961
26175 elgaserito 49.8 MiB php-fpm: pool 16957576059961
3975 elgaserito 49.6 MiB php-fpm: pool 16957576059961
30232 elgaserito 48.36 MiB php-fpm: pool 16957576059961
9807 fumitotal 48.22 MiB php-fpm: pool 169578142025785
23396 dralizbeth 47.96 MiB php-fpm: pool 1697514077327
9801 fumitotal 47.7 MiB php-fpm: pool 169578142025785
22728 dralizbeth 47.48 MiB php-fpm: pool 1697514077327
9686 fumitotal 47.42 MiB php-fpm: pool 169578142025785
5786 dralizbeth 46.82 MiB php-fpm: pool 1697514077327
8553 fumitotal 46.4 MiB php-fpm: pool 169578142025785
17840 dralizbeth 45.78 MiB php-fpm: pool 1697514077327
9590 fumimarket 45.11 MiB php-fpm: pool 16971693551323
8559 fumitotal 44.93 MiB php-fpm: pool 169578142025785
8405 fumitotal 44.05 MiB php-fpm: pool 169578142025785
9649 fumimarket 43.21 MiB php-fpm: pool 16971693551323
28880 ironcat 42.94 MiB php-fpm: pool 16959253019240
9760 fumitotal 42.78 MiB php-fpm: pool 169578142025785
920 root 41.58 MiB /usr/bin/python2 -s /usr/bin/fail2ban-server -xf start
9676 fumimarket 41.57 MiB php-fpm: pool 16971693551323
9687 fumimarket 41.53 MiB php-fpm: pool 16971693551323
9771 fumitotal 41.14 MiB php-fpm: pool 169578142025785
26320 sda 40.27 MiB php-fpm: pool 169611119315996
9695 fumimarket 40.26 MiB php-fpm: pool 16971693551323
9626 fumimarket 39.96 MiB php-fpm: pool 16971693551323
9618 fumimarket 39.87 MiB php-fpm: pool 16971693551323
32554 sda 39.51 MiB php-fpm: pool 169611119315996
22736 sdacorp 39.31 MiB php-fpm: pool 169784838632271
4696 sdacorp 38.47 MiB php-fpm: pool 169784838632271
9677 fumistore 38.29 MiB php-fpm: pool 16973229509413
26345 sda 38.16 MiB php-fpm: pool 169611119315996
11460 roedores 38.12 MiB php-fpm: pool 169751685621255
14096 sda 37.71 MiB php-fpm: pool 169611119315996
9616 desinfecciones 37.69 MiB php-fpm: pool 16960295211544
7232 sdacorp 37.52 MiB php-fpm: pool 169784838632271
8424 fumix 37.49 MiB php-fpm: pool 169611287626022
9689 fumistore 37.43 MiB php-fpm: pool 16973229509413
26373 desinfecciones 37.42 MiB php-fpm: pool 16960295211544
24355 sda 37.04 MiB php-fpm: pool 169611119315996
21070 fumihouse 36.95 MiB php-fpm: pool 169716913229786
9600 desinfecciones 36.91 MiB php-fpm: pool 16960295211544
26377 roedores 36.9 MiB php-fpm: pool 169751685621255
9665 fumimarket 36.64 MiB php-fpm: pool 16971693551323
28552 sdacorp 36.63 MiB php-fpm: pool 169784838632271
26318 desinfecciones 36.63 MiB php-fpm: pool 16960295211544
22735 sdacorp 36.37 MiB php-fpm: pool 169784838632271
10775 sda 36.34 MiB php-fpm: pool 169611119315996
20969 sanitek 36.3 MiB php-fpm: pool 16975191513915
9648 desinfecciones 36.28 MiB php-fpm: pool 16960295211544
9653 fumistore 35.77 MiB php-fpm: pool 16973229509413
19303 fumix 35.52 MiB php-fpm: pool 169611287626022
21114 dacorp 34.94 MiB php-fpm: pool 16954321631497
20031 roedores 34.86 MiB php-fpm: pool 169751685621255
19317 misho 34.75 MiB php-fpm: pool 16973225382174
19326 misho 34.24 MiB php-fpm: pool 16973225382174
30642 roedores 34.15 MiB php-fpm: pool 169751685621255
20028 roedores 34.12 MiB php-fpm: pool 169751685621255
16257 fumigaciones 34.05 MiB php-fpm: pool 169611327630598
28995 odonto 34.03 MiB php-fpm: pool 16975153049623
9633 fumistore 33.62 MiB php-fpm: pool 16973229509413
9544 fumigaciones 33.51 MiB php-fpm: pool 169611327630598
11452 roedores 33.51 MiB php-fpm: pool 169751685621255
4127 roedores 32.64 MiB php-fpm: pool 169751685621255
32649 cleanpro 32.3 MiB php-fpm: pool 169602882724328
9522 fumigaciones 32.3 MiB php-fpm: pool 169611327630598
19349 misho 32.25 MiB php-fpm: pool 16973225382174
28988 odonto 32.24 MiB php-fpm: pool 16975153049623
9601 fumigaciones 32.23 MiB php-fpm: pool 169611327630598
1808 misho 32.2 MiB php-fpm: pool 16973225382174
3824 fumigaciones 32.16 MiB php-fpm: pool 169611327630598
11848 sda 32.16 MiB php-fpm: pool 169611119315996
13601 sdacorp 32.12 MiB php-fpm: pool 169784838632271
28989 odonto 32.1 MiB php-fpm: pool 16975153049623
18926 desinfecciones 32.09 MiB php-fpm: pool 16960295211544
13596 sdacorp 32.03 MiB php-fpm: pool 169784838632271
20965 sanitek 32.02 MiB php-fpm: pool 16975191513915
20955 sanitek 31.98 MiB php-fpm: pool 16975191513915
21059 fumihouse 31.93 MiB php-fpm: pool 169716913229786
20958 sanitek 31.9 MiB php-fpm: pool 16975191513915
21058 fumihouse 31.82 MiB php-fpm: pool 169716913229786
21079 fumihouse 31.65 MiB php-fpm: pool 169716913229786
9542 fumigaciones 31.57 MiB php-fpm: pool 169611327630598
11097 fumistore 31.45 MiB php-fpm: pool 16973229509413
522 odonto 31.44 MiB php-fpm: pool 16975153049623
19309 misho 31.41 MiB php-fpm: pool 16973225382174
21083 dacorp 31.03 MiB php-fpm: pool 16954321631497
9726 fumigaciones 31 MiB php-fpm: pool 169611327630598
9666 fumistore 30.25 MiB php-fpm: pool 16973229509413
523 roedores 30.09 MiB php-fpm: pool 169751685621255
4883 fumix 30.03 MiB php-fpm: pool 169611287626022
21071 fumihouse 29.98 MiB php-fpm: pool 169716913229786
21057 fumihouse 29.93 MiB php-fpm: pool 169716913229786
21067 fumihouse 29.8 MiB php-fpm: pool 169716913229786
20966 sanitek 29.62 MiB php-fpm: pool 16975191513915
21061 fumihouse 29.53 MiB php-fpm: pool 169716913229786
20957 sanitek 29.13 MiB php-fpm: pool 16975191513915
1804 fumix 28.66 MiB php-fpm: pool 169611287626022
15223 fumigaciones 28.42 MiB php-fpm: pool 169611327630598
21115 dacorp 28.21 MiB php-fpm: pool 16954321631497
28978 odonto 28.06 MiB php-fpm: pool 16975153049623
28996 odonto 28.03 MiB php-fpm: pool 16975153049623
26401 odonto 27.91 MiB php-fpm: pool 16975153049623
593 root 27.67 MiB /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid
21110 dacorp 26.2 MiB php-fpm: pool 16954321631497
20032 odonto 26.02 MiB php-fpm: pool 16975153049623
27203 cleanpro 25.1 MiB php-fpm: pool 169602882724328
1842 cleanpro 23.84 MiB php-fpm: pool 169602882724328
1192 root 18.95 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
1267 root 18.57 MiB /usr/bin/perl /usr/libexec/webmin/miniserv.pl /etc/webmin/miniserv.conf
1186 root 18.25 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
7943 root 18.25 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
1024 root 18.25 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
1168 root 18.25 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
3626 root 18.25 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
14597 root 18.21 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
14626 root 18.21 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
14684 root 18.21 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
14643 root 18.2 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
14747 root 18.2 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
32254 root 18.18 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
14494 root 18.17 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
14348 root 18.16 MiB /usr/bin/perl /usr/libexec/usermin/miniserv.pl /etc/usermin/miniserv.conf
3522 root 17.35 MiB /usr/bin/perl /usr/libexec/webmin/miniserv.pl /etc/webmin/miniserv.conf
7178 root 17.34 MiB /usr/bin/perl /usr/libexec/webmin/miniserv.pl /etc/webmin/miniserv.conf
538 grmilter 15.86 MiB /usr/sbin/milter-greylist -D
594 postgrey 13.69 MiB postgrey --unix=/var/spool/postfix/postgrey/socket --pidfile=/var/run/postgrey.p …
908 root 13.21 MiB /usr/bin/python2 -Es /usr/sbin/tuned -l -P
22694 root 10.89 MiB php-fpm: master process (/etc/opt/remi/php74/php-fpm.conf)
391 root 10.34 MiB /usr/lib/systemd/systemd-journald
18137 apache 10.2 MiB /usr/sbin/httpd -DFOREGROUND
18143 apache 10.16 MiB /usr/sbin/httpd -DFOREGROUND
18957 apache 10.07 MiB /usr/sbin/httpd -DFOREGROUND
549 polkitd 9.88 MiB /usr/lib/polkit-1/polkitd --no-debug
20252 apache 9.83 MiB /usr/sbin/httpd -DFOREGROUND
21371 apache 9.82 MiB /usr/sbin/httpd -DFOREGROUND
20253 apache 9.78 MiB /usr/sbin/httpd -DFOREGROUND
21372 apache 9.61 MiB /usr/sbin/httpd -DFOREGROUND
21797 apache 9.51 MiB /usr/sbin/httpd -DFOREGROUND
913 root 9.27 MiB /usr/sbin/rsyslogd -n
22981 apache 9.21 MiB /usr/sbin/httpd -DFOREGROUND
24136 apache 8.03 MiB /usr/sbin/httpd -DFOREGROUND
22737 apache 8.02 MiB php-fpm: pool www
22738 apache 8.02 MiB php-fpm: pool www
22739 apache 8.02 MiB php-fpm: pool www
22740 apache 8.02 MiB php-fpm: pool www
22741 apache 8.02 MiB php-fpm: pool www
24138 apache 8 MiB /usr/sbin/httpd -DFOREGROUND
1106 root 7.37 MiB /usr/sbin/httpd -DFOREGROUND
1119 root 7.03 MiB dovecot/config
24139 apache 6.8 MiB /usr/sbin/httpd -DFOREGROUND
23006 postfix 6.78 MiB smtpd -n smtps -t inet -u -o stress= -s 2 -o smtpd_sasl_auth_enable=yes -o smtpd …
23018 postfix 6.48 MiB smtpd -n smtp -t inet -u -o stress= -s 2 -o smtpd_sasl_auth_enable=yes -o smtpd_ …
23588 postfix 6.05 MiB smtpd -n submission -t inet -u -o stress= -s 2 -o smtpd_sasl_auth_enable=yes -o …
23589 postfix 6.05 MiB smtpd -n submission -t inet -u -o stress= -s 2 -o smtpd_sasl_auth_enable=yes -o …
23611 postfix 6.05 MiB smtpd -n submission -t inet -u -o stress= -s 2 -o smtpd_sasl_auth_enable=yes -o …
23585 postfix 6.04 MiB smtpd -n submission -t inet -u -o stress= -s 2 -o smtpd_sasl_auth_enable=yes -o …
23615 postfix 6.04 MiB smtpd -n submission -t inet -u -o stress= -s 2 -o smtpd_sasl_auth_enable=yes -o …
25025 apache 5.65 MiB /usr/sbin/httpd -DFOREGROUND
4862 root 5.42 MiB php-fpm: master process (/etc/php-fpm.conf)
4863 apache 5.08 MiB php-fpm: pool www
4864 apache 5.08 MiB php-fpm: pool www
4865 apache 5.08 MiB php-fpm: pool www
4866 apache 5.08 MiB php-fpm: pool www
4867 apache 5.08 MiB php-fpm: pool www
1118 opendkim 4.94 MiB /usr/sbin/opendkim -b sv
18498 postfix 4.01 MiB pickup -l -t unix -u
23007 postfix 4.01 MiB proxymap -t unix -u
23011 postfix 4 MiB anvil -l -t unix -u
1 root 3.2 MiB /usr/lib/systemd/systemd --switched-root --system --deserialize 22
596 root 2.71 MiB /usr/sbin/NetworkManager --no-daemon
21010 root 2.46 MiB /usr/sbin/CROND -n
572 root 2.1 MiB /usr/sbin/saslauthd -m /run/saslauthd -a pam -r
574 root 2.1 MiB /usr/sbin/saslauthd -m /run/saslauthd -a pam -r
570 root 2.09 MiB /usr/sbin/saslauthd -m /run/saslauthd -a pam -r
571 root 2.09 MiB /usr/sbin/saslauthd -m /run/saslauthd -a pam -r
573 root 2.09 MiB /usr/sbin/saslauthd -m /run/saslauthd -a pam -r
1440 nobody 2.03 MiB proftpd: (accepting connections)
1426 postfix 1.68 MiB qmgr -l -t unix -u
1552 postfix 1.51 MiB tlsmgr -l -t unix -u
21015 root 1.41 MiB /bin/bash /usr/share/clamav/freshclam-sleep
557 dbus 1.36 MiB /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd- …
905 root 1.26 MiB /usr/sbin/sshd -D
1390 root 1.24 MiB /usr/libexec/postfix/master -w
23016 root 1.22 MiB /usr/lib/systemd/systemd-hostnamed
428 root 1.17 MiB /usr/lib/systemd/systemd-udevd
21012 root 1.17 MiB /bin/sh -c /usr/share/clamav/freshclam-sleep > /dev/null
579 root 1.1 MiB /usr/lib/systemd/systemd-logind
588 root 956 KiB /usr/sbin/crond -n
561 chrony 860 KiB /usr/sbin/chronyd
1113 root 740 KiB /usr/sbin/dovecot
550 rpc 568 KiB /sbin/rpcbind -w
1116 root 548 KiB dovecot/log
555 root 512 KiB /usr/sbin/irqbalance --foreground
21017 root 360 KiB sleep 4143
1115 dovecot 340 KiB dovecot/anvil
953 root 128 KiB /sbin/agetty --noclear tty1 linux
2 root 0 kB [kthreadd]
4 root 0 kB [kworker/0:0H]
6 root 0 kB [ksoftirqd/0]
7 root 0 kB [migration/0]
8 root 0 kB [rcu_bh]
9 root 0 kB [rcu_sched]
10 root 0 kB [lru-add-drain]
11 root 0 kB [watchdog/0]
12 root 0 kB [watchdog/1]
13 root 0 kB [migration/1]
14 root 0 kB [ksoftirqd/1]
16 root 0 kB [kworker/1:0H]
17 root 0 kB [watchdog/2]
18 root 0 kB [migration/2]
19 root 0 kB [ksoftirqd/2]
21 root 0 kB [kworker/2:0H]
22 root 0 kB [watchdog/3]
23 root 0 kB [migration/3]
24 root 0 kB [ksoftirqd/3]
26 root 0 kB [kworker/3:0H]
27 root 0 kB [watchdog/4]
28 root 0 kB [migration/4]
29 root 0 kB [ksoftirqd/4]
31 root 0 kB [kworker/4:0H]
32 root 0 kB [watchdog/5]
33 root 0 kB [migration/5]
34 root 0 kB [ksoftirqd/5]
36 root 0 kB [kworker/5:0H]
38 root 0 kB [kdevtmpfs]
39 root 0 kB [netns]
40 root 0 kB [khungtaskd]
41 root 0 kB [writeback]
42 root 0 kB [kintegrityd]
43 root 0 kB [bioset]
44 root 0 kB [bioset]
45 root 0 kB [bioset]
46 root 0 kB [kblockd]
47 root 0 kB [md]
48 root 0 kB [edac-poller]
49 root 0 kB [watchdogd]
55 root 0 kB [kswapd0]
56 root 0 kB [ksmd]
57 root 0 kB [khugepaged]
58 root 0 kB [crypto]
66 root 0 kB [kthrotld]
68 root 0 kB [kmpath_rdacd]
69 root 0 kB [kaluad]
70 root 0 kB [kpsmoused]
72 root 0 kB [ipv6_addrconf]
86 root 0 kB [deferwq]
123 root 0 kB [kauditd]
272 root 0 kB [ata_sff]
276 root 0 kB [ttm_swap]
284 root 0 kB [scsi_eh_0]
285 root 0 kB [scsi_tmf_0]
286 root 0 kB [scsi_eh_1]
287 root 0 kB [scsi_tmf_1]
289 root 0 kB [virtscsi-scan]
290 root 0 kB [scsi_eh_2]
291 root 0 kB [scsi_tmf_2]
311 root 0 kB [jbd2/sda3-8]
312 root 0 kB [ext4-rsv-conver]
416 root 0 kB [kworker/0:1H]
504 root 0 kB [jbd2/sda2-8]
505 root 0 kB [ext4-rsv-conver]
1547 root 0 kB [dio/sda3]
1549 root 0 kB [kworker/1:1H]
1571 root 0 kB [kworker/2:1H]
1578 root 0 kB [kworker/3:1H]
1581 root 0 kB [kworker/5:1H]
1634 root 0 kB [kworker/4:1H]
2351 root 0 kB [kworker/5:2]
11028 root 0 kB [kworker/4:2]
15283 root 0 kB [kworker/u12:0]
16869 root 0 kB [kworker/2:0]
16871 root 0 kB [kworker/4:0]
18931 root 0 kB [kworker/5:0]
18933 root 0 kB [kworker/0:0]
19360 root 0 kB [kworker/3:2]
19845 root 0 kB [kworker/1:2]
20577 root 0 kB [kworker/1:1]
20770 root 0 kB [kworker/0:1]
20960 root 0 kB [kworker/u12:1]
20962 root 0 kB [kworker/3:0]
23652 root 0 kB [kworker/4:1]
23711 root 0 kB [kworker/1:0]
24203 root 0 kB [/usr/libexec/we]
26105 root 0 kB [kworker/2:2]

What I want to know is whether increasing the RAM would solve the problem, or whether the processes would simply continue to increase until the RAM is overloaded again.

This is the configuration I have for the database:

That is not “increasing the RAM”. That’s telling MariaDB to use more RAM. Since you don’t have enough RAM, telling processes to use more seems like a terrible idea. That can only make it worse, but probably isn’t directly related to your OOM killer events. (None of the default config sizes would cause MariaDB to grow really huge for your system…something else is getting really big.)

This is not a configurable limit that you’re running into. The kernel is literally running out of system memory to allocate. Nothing you configure that increases memory usage (e.g. telling processes to cache more aggressively) is going to make that problem better. Adding more memory to the system would likely solve it, but you probably also have something consuming more than its fair share.

I don’t see anything egregiously large in your process list, but if one of your apps is hanging onto a bunch of running PHP-FPM processes, or causing really huge long-lasting database queries, when a bunch of requests happen that trigger that…well, resource-usage would balloon up.
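
If one site turns out to be holding onto too many workers, the usual dial is the per-pool PHP-FPM limits. A minimal sketch of the kind of settings involved (the pool name is copied from your listing, the values are purely illustrative, and since Virtualmin generates these pool files you would normally adjust the limits through Virtualmin rather than by hand):

[169592578914685]
pm = ondemand
; hard cap on simultaneous PHP workers for this one site
pm.max_children = 8
; idle workers are reaped after this long
pm.process_idle_timeout = 10s
; recycle each worker after this many requests so slow leaks cannot accumulate
pm.max_requests = 500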

Increasing physical RAM is just going to be like throwing more meat to the lions!
It is not going to solve anything → just maybe delay things a little. Basically you have a memory leak.

I presume this is a production server and that you don’t have a development server on which to experiment. I still believe one of those WP plugins is at fault, and finding which one is going to be difficult on a production box. Your PHP logs may help → if you can trace the queries coming out of the WP plugins around the time that MariaDB goes down.
On a development box it would be simpler (isolate the domain, then identify the plugin).
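
On a production box, one low-risk way to see which site’s queries are heaviest around a crash is the slow query log. A minimal sketch, assuming you can edit /etc/my.cnf (the file path and threshold are illustrative):

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow-query.log
# log anything that takes longer than 5 seconds
long_query_time = 5

Restart MariaDB afterwards, then match the database users in that log against the per-domain PHP-FPM users shown above to narrow it down to a domain, and from there to a plugin.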

Hi! I have the same issue with 9 active domains and 9 WordPress sites. Should I open a new topic or can I continue with this one? I’m not sure where to find the logs to show you, though…

You should open a new one, but read this thread first, to see what information we need to be able to help you. (How to see logs, kernel log, in particular, is discussed above, for instance. Since your problem is probably the OOM killer, you need to check the kernel log for OOM killer events.)
