MariaDB stops every few hours

Hello @Joe,

I have the same problem: every once in a while MariaDB stops.

Temporary solution:
In System and Server Status, I set "If monitor goes down, run command" to
systemctl restart mysql.service

When MariaDB stops, the monitor restarts it and everything works again, but this is only a temporary workaround.

But I would like a permanent solution to the problem.

Log file: /var/log/mysql/error.log

2024-01-01 2:45:03 0 [Note] Starting MariaDB 10.11.4-MariaDB-1~deb12u1 source revision as process 1675028
2024-01-01 2:45:03 0 [Note] InnoDB: Compressed tables use zlib 1.2.13
2024-01-01 2:45:03 0 [Note] InnoDB: Number of transaction pools: 1
2024-01-01 2:45:03 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
2024-01-01 2:45:03 0 [Note] InnoDB: Using liburing
2024-01-01 2:45:03 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
2024-01-01 2:45:03 0 [Note] InnoDB: Completed initialization of buffer pool
2024-01-01 2:45:03 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
2024-01-01 2:45:04 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=14988233921
2024-01-01 2:45:04 0 [Note] InnoDB: Starting final batch to recover 1426 pages from redo log.
2024-01-01 2:45:04 0 [Note] InnoDB: 128 rollback segments are active.
2024-01-01 2:45:04 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
2024-01-01 2:45:04 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
2024-01-01 2:45:04 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
2024-01-01 2:45:04 0 [Note] InnoDB: log sequence number 14998205414; transaction id 10742613
2024-01-01 2:45:04 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2024-01-01 2:45:04 0 [Note] Plugin 'FEEDBACK' is disabled.
2024-01-01 2:45:04 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
2024-01-01 2:45:04 0 [Note] Server socket created on IP: '127.0.0.1'.
2024-01-01 2:45:04 0 [Note] InnoDB: Buffer pool(s) load completed at 240101 2:45:04
2024-01-01 2:45:04 0 [Note] /usr/sbin/mariadbd: ready for connections.
Version: '10.11.4-MariaDB-1~deb12u1' socket: '/run/mysqld/mysqld.sock' port: 3306 Debian 12
2024-01-01 2:45:04 3 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
2024-01-01 2:45:04 4 [Warning] Access denied for user 'root'@'localhost' (using password: NO)


As I just said to the other commenter, please open a new topic. And read through this thread first to see the information we need you to provide.

You need to check the kernel log (dmesg) for OOM killer messages, as that’s almost certainly what’s happening.
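For example, something like the following should reveal whether the kernel's OOM killer terminated mariadbd (the exact message wording varies between kernel versions):

# search the kernel ring buffer for OOM killer activity
dmesg -T | grep -i -E 'out of memory|oom-killer|killed process'

# or, on systemd systems, search the persistent kernel log instead
journalctl -k | grep -i -E 'out of memory|oom-killer|killed process'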

Also, I want to point out that if MariaDB is being killed regularly, it will eventually lead to data loss or database corruption (it is quite robust and tries to cope with all sorts of hardware failures, but it cannot completely protect your data if it is killed and unable to shut down in an orderly fashion). Restarting it automatically increases that risk, because it is presumably going to be killed again soon after.

If you urgently need to have your database running, you urgently need to figure out why it’s being killed and fix the root cause.

In the end I simply increased the RAM, and with that the database stopped stopping. It seems all of the RAM was being used, to the point where the system could not continue and killed the MariaDB process.

Is there a way to assign a certain maximum amount of RAM to each process?

Yes, but that just means it’ll be killed when it tries to allocate too much.
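If you still want a hard cap, one way (assuming the service is managed by systemd with cgroup v2; the unit may be named mariadb.service or mysql.service depending on the distribution) is a drop-in with a memory limit. But again, reaching the limit just means the service gets throttled or killed:

# open a drop-in editor for the unit (adjust the unit name to your system)
systemctl edit mariadb.service

# add something like this in the drop-in:
[Service]
MemoryHigh=768M
MemoryMax=1G

# apply the change
systemctl restart mariadb.service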

You need to configure MariaDB appropriately for the amount of RAM you have, if memory is very constrained. But you should know that the process killed by the OOM killer is not necessarily the misbehaving process or the one that is consuming all the RAM.
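As a rough sketch only (the right numbers depend entirely on your workload and total RAM; these are the standard MariaDB variables, set e.g. in a file under /etc/mysql/mariadb.conf.d/ on Debian), the main memory knobs look like this:

[mysqld]
# the largest single allocation; your log shows the 128MiB default
innodb_buffer_pool_size = 128M
# every connection can allocate its own buffers, so cap connections too
max_connections = 50
# per-connection / per-operation buffers, keep modest on small machines
sort_buffer_size = 2M
join_buffer_size = 2M
# in-memory temporary tables
tmp_table_size = 32M
max_heap_table_size = 32M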

Unless you have confirmed MariaDB is the big memory user, you don't know it's the one you need to be "fixing". It could be something else entirely; MariaDB is not known to leak memory, and it's expected to be pretty efficient. I would be inclined to look at your web apps, their configuration, and how much memory they're using.
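A quick way to see which processes are actually holding the most memory before deciding what to tune:

# top memory consumers by resident set size
ps aux --sort=-%mem | head -n 15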

Late to the discussion: a few years ago, on our older RedHat web servers, we had a lot of OOM issues. It turned out we had too many spawned PHP child processes using the older FCGI methodology. Once we changed to PHP-FPM and limited child processes to just 5 or 10 per virtual host, our OOM problems went away.
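For anyone hitting the same thing: the relevant limits live in the per-pool PHP-FPM configuration (the path and pool name vary by distribution and control panel); the values below are only an illustration of the kind of cap that worked for us, not a recommendation:

; e.g. /etc/php/8.2/fpm/pool.d/example.conf (path and PHP version vary)
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
; recycle workers periodically so slow memory growth can't accumulate
pm.max_requests = 500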