My server disks are filling at an alarming rate

Operating system: Ubuntu
OS version: 16.04.3

I think I went from 53% to 63% used this week after adding nothing more than a few rows to a table in my database, a routine I've been doing weekly for over 100 weeks in a row.

On this server, I'm running Webmin only; it's not an outward-facing server. I have about 1 TB of total usable space on the RAID. I don't know what tools would be best to discover what is consuming space so quickly, because I'm not doing anything that should add this much data. The main function of this server is to run a MySQL server, which is accessed by a Virtualmin-based, outward-facing server. This problematic server, managed with standard Webmin, can't manage MySQL because I upgraded to MySQL 8.0.22 instead of the version available from APT. I use phpMyAdmin to work with the databases, but that doesn't show total disk space.
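
For the overall percentages, plain df is enough, though it doesn't say what is consuming the space:

df -h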

The databases I created on this server are sized:

  • DB1 = 96.0 KiB
  • DB2 = 4.6 MiB
  • DB3 = 186.6 MiB
  • DB4 = 84.0 GiB
  • DB5 = 6.0 KiB

What commands or tools could I run to discover what is going wrong? How do I get a list of files from biggest to smallest, or a list of the files that were created or modified recently?

List of files 50 Megabytes and larger
find / -type f -size +50M -exec du -h {} \; | sort -h

Directory using most inodes
find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
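
For the recently created or modified part of the question, the same idea restricted to files touched in the last week (GNU find and sort assumed) could look like this:

find / -xdev -type f -mtime -7 -exec du -h {} + | sort -hr | head -n 25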


This should list the size of each directory tree:

find . -depth -type d -exec du -BM -s {} \;

Check du parameters on your machine to format the output in suitable units.

Pipe through less and/or sort -nr etc.
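
For example, to page through the largest directories first:

find . -depth -type d -exec du -BM -s {} \; | sort -nr | less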


find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
didn’t show anything out of the ordinary.

find / -type f -size +50M -exec du -h {} \; | sort -h
listed about 150 binlog._______ files for the MySQL server, each over 1.1 GB, plus an unknown number below that size. I have a bunch of .ibd files between 10 GB and 40 GB because I have many indexes on huge datasets, but I think the binlogs are my problem.

I don’t remember doing it, but it appears I have replication turned on for the data server, so the logs are there for nearly no reason at all.
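
Worth noting: MySQL 8.0 enables binary logging by default even when no replication is configured, which would explain the logs appearing after the upgrade to 8.0.22. Assuming the mysql client can authenticate (credentials in ~/.my.cnf or passed with -u/-p), a quick check looks like this:

mysql -e "SHOW VARIABLES LIKE 'log_bin';"    # ON means binary logging is active
mysql -e "SHOW BINARY LOGS;"                 # lists each binlog file and its size
du -ch /var/lib/mysql/binlog.* | tail -n 1   # total space the binlogs use on disk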

find . -depth -type d -exec du -BM -s {} \;
That is a cool command. I found a bunch of old MySQL dumps I forgot about from 2 years ago when I did something major to the server. Don’t need them anymore.

Lots of advice out there on how to stop a REPLICA (aka slave) server, but very little on how to stop a server from acting as a master. A lot of sites said to comment out these lines in /etc/mysql/my.cnf:

# replicate-same-server-id = 0
# master-host = 192.168.0.105
# master-user = slaveuser
# master-password = akst6Wqcz2B
# master-connect-retry = 60

My my.cnf turned out to be a symbolic link to begin with, and no file ending in .cnf had any of those lines in it.
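
In case anyone else is chasing the same symlink maze, the client prints which config files it reads, and in what order:

mysql --help | grep -A 1 "Default options"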

I used some advice from this site:


so now my /etc/mysql/mysql.cnf file has this at the bottom:

[mysqld]
skip-slave-start
skip-log-bin
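
Those settings only take effect after a restart; once the server comes back up, log_bin should report OFF:

mysql -e "SHOW VARIABLES LIKE 'log_bin';"    # OFF once skip-log-bin is in effect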

And this site is the real deal for solving my issue:


specifically the part about executing the following commands:
cd /var/lib/mysql
service mysql stop
rm -f master.info relay-*
service mysql start

After all that, I just needed to delete the binlog.______ files and the binlog index file.
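
For anyone who hits this later: as long as binary logging is still enabled at that point, it is safer to let MySQL remove its own logs than to rm them by hand, since that keeps the binlog index consistent:

mysql -e "PURGE BINARY LOGS BEFORE NOW();"   # drops every binlog except the one currently in use
mysql -e "RESET MASTER;"                     # or: remove all binlogs and start the numbering over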

I went from 63% local disk space use this morning to 35% disk space use now.
