My little server ran into an issue and started reporting the error:
No space left on device
No worries, let's figure out which disk is full and clean up…
Using the df command with the -h flag (for human-readable output), it should be easy to find the issue:
root@server:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            483M     0  483M   0% /dev
tmpfs           100M  3.1M   97M   4% /run
/dev/vda         20G  9.3G  9.4G  50% /
tmpfs           500M     0  500M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           500M     0  500M   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           100M     0  100M   0% /run/user/1000
Strange. Notice how /dev/vda is only 50% filled, and all the other disk devices seem to be fine too. Well, after a little digging, thinking and googling, it turns out device space consists of two things - space (for data) on the device and inodes (the structures used to manage that space - where the data goes - simplified).
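(As a quick aside, you can see inodes for yourself with standard tools; a tiny illustration, using /etc/hostname purely as an example file:)

# every file and directory occupies at least one inode;
# ls -i prints the inode number next to the name
ls -i /etc/hostname
# stat can print the inode (%i) and the name (%n) explicitly
stat -c '%i %n' /etc/hostname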
So the next move was to look at the inodes:
root@server:~# df -i
Filesystem      Inodes   IUsed  IFree IUse% Mounted on
udev            123562     375 123187    1% /dev
tmpfs           127991     460 127531    1% /run
/dev/vda       1310720 1308258   2462  100% /
tmpfs           127991       1 127990    1% /dev/shm
tmpfs           127991       3 127988    1% /run/lock
tmpfs           127991      18 127973    1% /sys/fs/cgroup
cgmfs           127991      14 127977    1% /run/cgmanager/fs
tmpfs           127991       4 127987    1% /run/user/1000
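Worth knowing: on ext2/3/4 filesystems the inode count is fixed when the filesystem is created, so once it runs out the only cure is deleting files (or rebuilding the filesystem with more inodes). A sketch of checking the limit, assuming the root filesystem here is ext4 on /dev/vda:

# assumption: /dev/vda carries an ext4 filesystem (tune2fs is ext-only)
tune2fs -l /dev/vda | grep -i inode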
Now, I did not expect this server to have a huge number of files, so something had to be off.
Finding where all those files were took a little digging too, starting with this command:
du --inodes -d 1 / | sort -n
It lists how many inodes are consumed by each directory directly under the root.
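(Side note: --inodes needs a reasonably recent GNU du; on older systems, a rough do-it-yourself equivalent is a sketch like this, counting entries per top-level directory:)

# fallback when du lacks --inodes: count entries under each
# top-level directory, staying on that directory's filesystem (-xdev)
for d in /*/; do
  printf '%s\t%s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -n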
The highest number was in /var, so the next step was:
du --inodes -d 1 /var | sort -n
I kept drilling down like this until I found the folder where an extreme number of files had accumulated, and solved the issue(*).
*) It turned out to be PHP session files eating up the inodes, which there is an easy solution for.
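For the curious, the fix boils down to removing stale session files (or letting PHP's own session garbage collector do it). A sketch, assuming the Debian/Ubuntu default session directory - the actual path varies, so check session.save_path first:

# assumption: Debian/Ubuntu default session dir; verify yours with:
#   php -i | grep session.save_path
# delete session files untouched for more than a day
find /var/lib/php/sessions -type f -name 'sess_*' -mtime +1 -delete

Using find with -delete matters here: with this many files, a plain rm /var/lib/php/sessions/sess_* would blow up with an "argument list too long" error.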