Applicable to:
- Plesk for Linux
Symptoms
-
When opening Plesk or exporting/importing a database in Plesk, the operation fails with one of the following error messages:
PLESK_INFO: ERROR: Plesk\Exception\Database
DB query failed: SQLSTATE[HY000]: General error: 1021 Disk full (/var/tmp/#sql_3b95_1); waiting for someone to free some space..., <...>
PLESK_INFO: Server Error
500 Plesk\Exception\Database
DB query failed: SQLSTATE[HY000]: General error: 1 Can't create/write to file '/var/tmp/#sql_9d1_0.MAI' (Errcode: 28), <...>
PLESK_INFO: Server Error
500
Zend_Db_Adapter_Exception
SQLSTATE[HY000][2002] No such file or directory
PLESK_INFO: This page isn’t working
203.0.113.2 is currently unable to handle this request.
HTTP ERROR 500 -
Websites with MySQL databases are not accessible, with the following error message shown in a web browser:
PLESK_INFO: Error establishing a database connection
-
The MySQL service fails to start with the "No space left on device" error in its status:
# systemctl status mariadb.service
...
systemd[1]: Starting MariaDB database server...
systemd[1]: mariadb.service failed to run 'start-pre' task: No space left on device
systemd[1]: Failed to start MariaDB database server.
systemd[1]: mariadb.service failed.
In rare cases:
systemd[1]: mariadb.service: Found left-over process 888 (mysqld) in control group while starting unit. Ignoring.
systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
or with the following error message in /var/log/mariadb/mariadb.log:
CONFIG_TEXT: [ERROR] InnoDB: Could not set the file size of './ibtmp1'. Probably out of disk space
Cause
Disk space is often exhausted by a Plesk backup consuming all available space. If you have access, check whether free space ran out during the backup process; if it did, a bug report should be filed.
-
The MySQL service cannot create temporary files because there is no free disk space left on the root partition (or on the /tmp partition, if it is separate):
# df -h /var/lib/mysql /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 50G 50G 20K 100% /
-
There is enough disk space available, but no free inodes:
# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
tmpfs 483902 1 483901 1% /dev/shm
/dev/sda3 3276800 3276800 0 100% /
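Both conditions above (exhausted disk space vs. exhausted inodes) can be checked in one pass. A minimal sketch, where the 90% threshold is an arbitrary example value, not a Plesk or system default:

```shell
#!/bin/sh
# Print filesystems whose block usage or inode usage is at or above
# THRESHOLD percent. THRESHOLD is an arbitrary example value.
THRESHOLD=90

check_usage() {
    # $1: extra df flag ("" for blocks, "-i" for inodes) - intentionally
    #     left unquoted below so an empty value expands to no argument;
    # $2: label prefix for the output.
    df -P $1 | awk -v t="$THRESHOLD" -v label="$2" \
        'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 >= t) print label, $6, $5 "%" }'
}

check_usage ""   "space:"
check_usage "-i" "inodes:"
```

A filesystem appearing under "inodes:" but not under "space:" matches the second cause above: free bytes remain, but no new files can be created.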
Resolution
First, to bring services back online, free up some disk space:
-
Connect to a Plesk server via SSH as root.
-
Delete temporary files that are older than 14 days:
# find /tmp -type f -mtime +14 -exec rm {} \;
# find /var/tmp -type f -mtime +14 -exec rm {} \;
as well as Plesk temporary files:
# rm -rf /usr/local/psa/PMM/tmp/* /usr/local/psa/tmp/*
-
Check the size of the package cache (previously downloaded packages) and clean it. On Ubuntu:
# du -sh /var/cache/apt/
# sudo apt-get clean
and on CentOS:
# yum clean all
-
Next, if disk usage still shows 100%, consider removing the oldest Plesk backup file. First, get the list of backup files with the following command:
# /usr/local/psa/admin/bin/pmm-ras --get-dump-list --type=server | grep 'message' | grep -v '[0-9]_[0-9]'
<message>backup_info_2010230005.xml: </message>
<message>backup_info_2010300005.xml: </message>
If you have only one backup, it is better not to remove it. If you have several, the first one in the output is the oldest. Remove it by name:
# /usr/local/psa/admin/bin/pmm-ras --verbose --debug --delete-dump --dump-specification=backup_info_2010230005.xml --session-path=/var/log/plesk/PMM
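The list-then-pick sequence above can be scripted. A sketch, assuming the pmm-ras output format shown above (lines of the form `<message>backup_info_YYMMDDHHMM.xml: </message>`, oldest first); verify the selected name before deleting anything:

```shell
#!/bin/sh
# Extract backup names from pmm-ras-style <message> lines (read on stdin)
# and print the first one, which is the oldest per the output order above.
oldest_backup() {
    sed -n 's/.*<message>\(backup_info_[0-9]*\.xml\).*/\1/p' | head -n 1
}

# Example (on a real server, pipe the actual pmm-ras output in):
# /usr/local/psa/admin/bin/pmm-ras --get-dump-list --type=server | oldest_backup
```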
-
If steps 2 and 3 did not help, try to find all files larger than 200 MB:
# find / -type f -size +200M -exec du -h {} + 2>/dev/null | sort -r -h
248M /var/log/plesk-php74-fpm/error.log
218M /var/lib/mysql/ibdata1
If there are log files among them, like error.log in the example above, you can clean them up:
# echo > /var/log/plesk-php74-fpm/error.log
# service plesk-php74-fpm restart
-
Restart services:
# service psa restart
-
Once MySQL and Plesk are running again, the easiest way to find what is taking up space is the Disk Space Usage Viewer extension.
Note: If you could not find any data to remove, most likely all files are required and the server has simply run out of space. In that case, additional disk space needs to be added or purchased for the server. This can be arranged with the hosting provider, i.e. the company from which the server was purchased.
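If the panel is unreachable, a rough shell-side equivalent of the extension is to rank directories by size. A sketch (the depth and result count are arbitrary choices; GNU du and sort are assumed, as shipped on the Linux distributions above):

```shell
#!/bin/sh
# Show the largest directories up to two levels below a starting point.
# -x keeps du on one filesystem, so other mounts are not counted twice.
biggest_dirs() {
    dir=${1:-/}
    du -xh --max-depth=2 "$dir" 2>/dev/null | sort -rh | head -n 15
}

# Example: inspect /var, which typically holds logs, databases, and backups.
biggest_dirs /var
```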
Comments
12 comments
Unfortunately this also did not work. Had to restart the container; this cleared it. The problem was not that the inodes were full, it was something else:
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/ploop24907p1 4587520 835456 3752064 19% /
@Ops-team
It may happen if there is not enough disk space or there are not enough free inodes on the hardware node, or if some limits set for the container have been reached.
If the issue occurs again, try to check if failcnt value for any limit is greater than zero by running the following inside the container:
I resolved it by adding a new virtual/portable drive and mounting it to /var/tmp.
If you need any help with partitioning, formatting & mounting, follow this article -->
https://www.digitalocean.com/community/tutorials/how-to-partition-and-format-storage-devices-in-linux
Hello Chaudhry Nabil Shahzad,
Thank you for sharing your experience.
It may be useful for other Pleskians.
I have the same issue but now, I can't even log in to my admin portal, can't do anything to raise a ticket.
I can't log in to the admin portal either. Trying to SSH and clear, but usage is still at 100%.
I did everything but the issue is not resolved, I keep getting the same error; I went from 100% to 30% of disk space used.
Hi, I have a
@Chaudhry Nabil Shahzad Thank you for the article you shared, https://www.digitalocean.com/community/tutorials/how-to-partition-and-format-storage-devices-in-linux. It is very helpful for partitioning and formatting storage devices. Now I can easily handle it.
That's not the only reason. I have free space and I still have the problem. Has the cause still not been found?
We're getting this SQLSTATE error, but the "500: Plesk\Exception\Database" one and not the disk space one. The older CentOS6 Plesk servers never had this problem, but ALL our servers we migrated to with CentOS 7 and Plesk with wordpress sites do this - generally a couple times a day. We've been troubleshooting for MONTHS and can't find the issue and have used every practical solution we've found via research. The servers are brand new and are less than 50% full. We've tweaked countless settings for Apache, Nginx, SQL, swappiness, PHP, etc. The servers were all doing this for longer periods and more often initially and over the months the new settings seem to have helped some, but it still happens. No change has ever immediately shown to be a certain fix or help as the situation happens so randomly and inexplicably that we can't determine a cause/effect troubleshooting pattern for tracking.
The Problem:
The CPU load average spikes within seconds from the normal ~3 to 40-60+, and kswapd0 jumps to 100% cpu, and the server including all sites, Plesk, and SSH will be inaccessible. Over 3-5 minutes the load average gradually calms down then everything is totally fine again for hours.
If ANYONE can figure out a true cause/fix PLEASE let us know.
@golfnitro Yes it's really working.