Failed to open : No space left on device

OS: Ubuntu Linux 20.04.2, Webmin: 1.973, Virtualmin: 6.16

My Virtualmin dashboard suddenly displayed the following error message when I tried to access the main system panel as well as other parts of the dashboard:

Failed to open /var/webmin/modules/package-updates/current.cache for writing : No space left on device

The error disappeared and the panel displayed normally again after I logged out of and back into Virtualmin (multiple times).

The server was just set up and there is nothing on it other than one virtual server with a new WordPress site that I installed to test PHP.

I’m worried that it might pop up again once the server hosts production sites, so I’d like to fix this beforehand.

What could be causing the error and how can I fix it?


Running df -h returns the following:

Filesystem         Size  Used Avail Use% Mounted on
/dev/ploop23338p1   30G  3.9G   25G  14% /
none               512M     0  512M   0% /sys/fs/cgroup
none               512M     0  512M   0% /dev
tmpfs              512M     0  512M   0% /dev/shm
tmpfs              103M  2.1M  101M   3% /run
tmpfs              5.0M     0  5.0M   0% /run/lock
none               512M     0  512M   0% /run/shm
tmpfs              103M     0  103M   0% /run/user/1002
tmpfs              103M     0  103M   0% /run/user/1003
tmpfs              103M     0  103M   0% /run/user/1000
tmpfs              103M     0  103M   0% /run/user/1004
tmpfs              103M     0  103M   0% /run/user/1008
tmpfs              103M     0  103M   0% /run/user/1006
tmpfs              103M     0  103M   0% /run/user/0

And running mount returns the following:

/dev/ploop23338p1 on / type ext4 (rw,relatime,data=ordered,balloon_ino=12,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group)
none on /sys type sysfs (rw,relatime)
none on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
proc on /proc type proc (rw,relatime)
none on /dev type devtmpfs (rw,nosuid,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=104860k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=600393980)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
none on /run/shm type tmpfs (rw,relatime)
tmpfs on /run/user/1002 type tmpfs (rw,nosuid,nodev,relatime,size=104856k,mode=700,uid=1002,gid=1002)
tmpfs on /run/user/1003 type tmpfs (rw,nosuid,nodev,relatime,size=104856k,mode=700,uid=1003,gid=1003)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=104856k,mode=700,uid=1000,gid=1000)
tmpfs on /run/user/1004 type tmpfs (rw,nosuid,nodev,relatime,size=104856k,mode=700,uid=1004,gid=1004)
tmpfs on /run/user/1008 type tmpfs (rw,nosuid,nodev,relatime,size=104856k,mode=700,uid=1008,gid=1006)
tmpfs on /run/user/1006 type tmpfs (rw,nosuid,nodev,relatime,size=104856k,mode=700,uid=1006,gid=1005)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=104856k,mode=700)

Please give the output of ‘df -i’ also.

Seeing as your root partition is named ploop, I’m quite certain you are running an OpenVZ container.
Either your provider set it up with too few inodes, or, more likely, they are overselling like crazy.
My suggestion: dump it, find a proper provider, and use KVM, not OVZ.
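
If df -i does show inode exhaustion, a rough sketch like this would help track down which top-level directory is holding most of the files (paths and options are just an example, adjust to taste):

# count files (inodes) under each top-level directory, without crossing into other filesystems
for d in /*; do printf '%s: ' "$d"; find "$d" -xdev 2>/dev/null | wc -l; done | sort -t: -k2 -n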


Yes, it is unfortunately an OpenVZ container, and df -i returns the following:

Filesystem         Inodes  IUsed   IFree IUse% Mounted on
/dev/ploop23338p1 1966080 162211 1803869    9% /
none               131072     17  131055    1% /sys/fs/cgroup
none               131072     75  130997    1% /dev
tmpfs              131072      1  131071    1% /dev/shm
tmpfs              131072    536  130536    1% /run
tmpfs              131072      4  131068    1% /run/lock
none               131072      1  131071    1% /run/shm
tmpfs              131072      8  131064    1% /run/user/1002
tmpfs              131072      8  131064    1% /run/user/1003
tmpfs              131072      8  131064    1% /run/user/1000
tmpfs              131072      8  131064    1% /run/user/1004
tmpfs              131072      8  131064    1% /run/user/1008
tmpfs              131072      8  131064    1% /run/user/1006
tmpfs              131072     21  131051    1% /run/user/0

Since you’re not out of inodes (and your own disk isn’t full either), that suggests the host node itself is nearly full.
You must contact your provider, as they are the only ones who can solve this.
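
When you open the ticket, it can help to show them that your container still reports free blocks and inodes while a write fails anyway. A quick reproduction along these lines should do (the test file path is just an example):

df -h / ; df -i /
dd if=/dev/zero of=/root/enospc-test bs=1M count=100 && rm -f /root/enospc-test

If the dd step errors out with “No space left on device” even though df shows plenty free on both counts, that points at the host node (or its ploop/quota backing) rather than anything inside your container.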

I’ll do that. Thanks for the help @toreskev
