In Linux, a process in uninterruptible sleep (D state) is typically waiting for an I/O operation (disk or network) and cannot be killed, not even with SIGKILL, until the in-kernel operation completes.
root@vps-2153e875:~# ps -eo pid,state,comm | awk '$2 == "D"'
2662710 D find
root@vps-2153e875:~# cat /proc/2662710/stat
2662710 (find) D 2660486 2662710 2660486 34817 2662710 4456448 162 0 0 0 5 40 0 0 20 0 1 0 470174814 5050368 576 18446744073709551615 94339750293504 94339750436697 140720879474736 0 0 0 0 0 0 1 0 0 17 0 0 0 0 0 0 94339750490160 94339750499624 94340756328448 140720879479322 140720879479338 140720879479338 140720879480810 0
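Beyond the state flag in /proc/PID/stat, /proc also exposes where in the kernel the task is sleeping via the wchan file. A minimal sketch, run against the current shell since PID 2662710 only exists on my machine:

```shell
# /proc/<pid>/wchan names the kernel function a sleeping task is blocked
# in; for a D-state task it usually points at the stuck subsystem (NFS
# client, block layer, ...). /proc/self is a runnable stand-in here.
cat /proc/self/wchan
echo
```

For a task stuck in D state, substitute the real PID (e.g. /proc/2662710/wchan) to see which kernel path it is waiting in.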
Unlike processes in interruptible sleep (S state), processes in D state count toward the system load average, often producing high load numbers even when CPU usage is low.
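You can see this relationship directly by comparing the number of D-state tasks with the load average; a minimal sketch:

```shell
# Load average counts runnable (R) plus uninterruptible (D) tasks, so a
# pile of D-state tasks inflates load even while the CPU sits idle
dcount=$(ps -eo state= | grep -c '^D' || true)
echo "D-state tasks: $dcount"
uptime
```

On a healthy system the count is usually 0; a load average far above the CPU count combined with a non-zero D-state count points at blocked I/O rather than CPU pressure.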
Common causes of a process in D state are NFS issues (a lost connection to an NFS server is the most frequent cause) and failing hardware (malfunctions that prevent I/O operations from completing).
If the resource becomes available (e.g., the NFS server comes back online), the process will resume automatically. For storage issues, attempting a “lazy” unmount (umount -l) or resetting the specific hardware device may help. If the process is stuck due to a kernel bug or permanently lost hardware, a system reboot is often the only way to clear it.
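For the NFS case, the lazy unmount looks like this; ‘/mnt/nfs’ is a placeholder path and the command is guarded so it is a no-op unless the path really is a mount point:

```shell
# Detach a hypothetical stale NFS mount so new lookups stop piling up;
# tasks already blocked in D state still need the server to come back
# (or a reboot) before they can exit the kernel
if mountpoint -q /mnt/nfs; then
    umount -l /mnt/nfs
fi
```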
Another situation that leaves processes in D state is a storage performance bottleneck. Let’s see this with the following example.
Let’s say I have a new WordPress server that is working extremely slowly, taking 1-2 minutes to load the main page. There are no network or authentication issues between the WordPress server and its database, and there are no long-running queries on the database.
After running ps to check for processes in D state, I found the following Apache processes:
root@vps-2153e875:~# ps aux | grep ' D '
www-data 309395 0.0 0.8 355572 70984 ? D 11:07 0:02 apache2 -DFOREGROUND
www-data 334001 0.0 0.5 269508 45144 ? D 12:45 0:00 apache2 -DFOREGROUND
Running strace on one of the previous processes (PID 309395), I see that the Apache process is listing and accessing a large number of PHP files, one by one, in the web directory tree ‘/var/www/html’. So the process is not permanently blocked doing nothing; it is making progress, just very slowly.
root@vps-2153e875:~# strace -r -f -s 1024 -p 309395
strace: Process 309395 attached
0.000000 newfstatat(AT_FDCWD, "/var/www/html/wp-includes/class-wp-oembed-controller.php", {st_mode=S_IFREG|0644, st_size=6905, ...}, 0) = 0
0.000466 newfstatat(AT_FDCWD, "/var/www/html/wp-includes/media.php", {st_mode=S_IFREG|0644, st_size=221186, ...}, AT_SYMLINK_NOFOLLOW) = 0
0.155414 newfstatat(AT_FDCWD, "/var/www/html/wp-includes/media.php", {st_mode=S_IFREG|0644, st_size=221186, ...}, 0) = 0
0.000422 newfstatat(AT_FDCWD, "/var/www/html/wp-includes/http.php", {st_mode=S_IFREG|0644, st_size=25878, ...}, AT_SYMLINK_NOFOLLOW) = 0
0.155305 newfstatat(AT_FDCWD, "/var/www/html/wp-includes/http.php", {st_mode=S_IFREG|0644, st_size=25878, ...}, 0) = 0
0.000259 newfstatat(AT_FDCWD, "/var/www/html/wp-includes/html-api/html5-named-character-references.php", {st_mode=S_IFREG|0644, st_size=80163, ...}, AT_SYMLINK_NOFOLLOW) = 0
...
0.000252 newfstatat(AT_FDCWD, "/var/www/html/wp-includes/rest-api/endpoints/class-wp-rest-menus-controller.php", {st_mode=S_IFREG|0644, st_size=17077, ...}, AT_SYMLINK_NOFOLLOW) = 0
0.155581 newfstatat(AT_FDCWD, "/var/www/html/wp-includes/rest-api/endpoints/class-wp-rest-menus-controller.php", {st_mode=S_IFREG|0644, st_size=17077, ...}, 0) = 0
0.000174 newfstatat(AT_FDCWD, "/var/www/html/wp-includes/rest-api/endpoints/class-wp-rest-menu-locations-controller.php", {st_mode=S_IFREG|0644, st_size=8963, ...}, AT_SYMLINK_NOFOLLOW) = 0
...
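The -r flag prefixes each syscall with the time elapsed since the previous one, so summing that first column over a saved strace capture tells you how much wall time the traced window spent between syscalls. A small awk sketch, fed two sample lines here (in practice, pipe in the saved strace output file):

```shell
# Sum the relative-timestamp column of an strace -r capture; the syscall
# text after the timestamp is ignored
total=$(awk '{ sum += $1 } END { printf "%.6f", sum }' <<'EOF'
0.155414 newfstatat
0.000422 newfstatat
EOF
)
echo "total: $total s"
```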
Listing the file systems inside the Kubernetes Pod where the Apache+WordPress installation runs, I can see that the directory ‘/var/www/html’ is actually an NFS mount.
root@vps-2153e875:~# kubectl exec -it frontend-rq9qd -n ns-adibexpress -- df -h
Filesystem Size Used Avail Use% Mounted on
overlay 72G 9.9G 62G 14% /
tmpfs 64M 0 64M 0% /dev
/dev/sda1 72G 9.9G 62G 14% /etc/hosts
shm 64M 0 64M 0% /dev/shm
51.79.160.8:/shared/kubernetes/web 40G 6.2G 32G 17% /var/www/html
tmpfs 7.5G 12K 7.5G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 3.8G 0 3.8G 0% /proc/acpi
tmpfs 3.8G 0 3.8G 0% /proc/scsi
tmpfs 3.8G 0 3.8G 0% /sys/firmware
root@vps-2153e875:~# kubectl exec -it frontend-rq9qd -n ns-adibexpress -- mount | grep nfs
51.79.160.8:/shared/kubernetes/web on /var/www/html type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=135.125.174.14,local_lock=none,addr=51.79.160.8)
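When you suspect the NFS mount, nfsiostat (from nfs-utils) breaks down per-operation round-trip times; high avg RTT on GETATTR/ACCESS is exactly the metadata cost behind the slow newfstatat calls seen in strace. Guarded here in case the tool isn’t installed:

```shell
# One 1-second sample of per-operation NFS statistics for the mount;
# look at the avg RTT (ms) columns for GETATTR and ACCESS
have=$(command -v nfsiostat || true)
if [ -n "$have" ]; then
    nfsiostat 1 1 /var/www/html
else
    echo "nfsiostat not installed"
fi
```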
WordPress is often slow on NFS primarily because its core architecture requires a huge number of small-file lookups, and each one triggers a network round-trip for the metadata operation. A single WordPress page load may require PHP to check hundreds of PHP files across the network, causing “latency stacking”.
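The strace -r timestamps above show roughly 155 ms per over-the-wire stat. Assuming around 400 files touched per page load (an illustrative number, not measured), the back-of-the-envelope math matches the 1-2 minute load times:

```shell
# ~155 ms of NFS metadata latency per file, stacked over ~400 files
awk 'BEGIN { printf "%.1f s\n", 400 * 0.155 }'
```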
Moving the WordPress web directory from NFS to local server storage makes the WordPress main page load in about 1 second. Another strategy is to keep the core WordPress files and plugins on local SSD storage while using NFS only for user-uploaded media (wp-content/uploads). In that case, remember to copy the local SSD files to every Apache node (in case you have more than one) each time you update the WordPress version or install/update plugins.
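Keeping nodes in sync after an update can be scripted with rsync. A minimal sketch using two temporary directories standing in for the local web root and a second Apache node, so it is runnable anywhere (in production the target would be something like node2:/var/www/html/ over SSH):

```shell
# Build a tiny stand-in WordPress tree
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/wp-content/uploads"
echo '<?php // core file' > "$src/index.php"
echo 'media' > "$src/wp-content/uploads/photo.jpg"

# -a preserves permissions/times, --delete removes files dropped by an
# update, --exclude keeps the NFS-backed uploads out of the copy
rsync -a --delete --exclude 'wp-content/uploads/' "$src/" "$dst/"
ls "$dst"
```

After the sync, the target has index.php and wp-content but no copy of the uploads directory, which stays on NFS.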
Since this post has talked about storage performance bottlenecks, I also want to mention that ‘sar -d’ is a great way to detect I/O performance issues on local disks and block-based storage. You should be alert when ‘await’ is consistently greater than 20 ms, or ‘%util’ is consistently between 90 and 100 %.
root@vps-2153e875:~# sar -d
...
21:00:00 DEV tps rkB/s wkB/s dkB/s areq-sz aqu-sz await %util
21:10:00 loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:10:00 sda 13.06 0.69 62.47 0.32 4.86 0.01 0.35 0.17
21:20:00 loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:20:00 sda 12.36 0.60 55.68 30.69 7.04 0.00 0.36 0.15
21:30:00 loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:30:00 sda 12.57 0.08 56.97 443.82 39.84 0.01 0.37 0.15
...
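Those thresholds can be applied automatically with a small awk filter over the report. Here it is fed two inline sample lines, the second one hypothetical so a device actually gets flagged; in practice, pipe ‘sar -d’ into the awk command:

```shell
# Flag devices whose await (9th field) exceeds 20 ms or whose %util
# (10th field) exceeds 90; the regex skips header/non-data lines
flagged=$(awk '$9 ~ /^[0-9.]+$/ && ($9 + 0 > 20 || $10 + 0 > 90) {
    print $2, "await=" $9 "ms", "util=" $10 "%"
}' <<'EOF'
21:10:00 sda 13.06 0.69 62.47 0.32 4.86 0.01 0.35 0.17
21:20:00 sdb 95.12 0.10 812.40 0.00 8.54 2.31 24.30 96.40
EOF
)
echo "$flagged"
```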
Adib Ahmed Akhtar