edit: you are right, it’s the I/O WAIT that is destroying my performance:
%Cpu(s): 0,3 us, 0,5 sy, 0,0 ni, 50,1 id, 49,0 wa, 0,0 hi, 0,1 si, 0,0 st
I could clearly see it using nmon > d > l > - as suggested by @SayCyberOnceMore. Not quite sure what to do about it, as it’s simply my sdb1 drive, a Samsung 1TB 2.5" HDD. I have now ordered a 2TB SSD, and maybe I’ll reinstall from scratch on that new drive as sda1. I realize that’s just treating the symptom and not the root cause, so I should probably also look for that root cause. But that’s for another Lemmy thread!
I really don’t understand what is causing this. I run a few very small containers, and everything is fine - but when I start something bigger like Photoprism, Immich, or even MariaDB or PostgreSQL, then something causes the CPU load to rise indefinitely.
Notably, the top command doesn’t show anything special: nothing eats RAM, nothing uses 100% CPU. And yet the load is rising fast. If I leave it be, my SSH session loses its connection. Hopping onto the host itself shows a load of over 50, or even over 70. I don’t grok how a system can even get that high at all.
My server is an older Intel i7 with 16GB RAM running Ubuntu 22.04 LTS.
How can I troubleshoot this, when ‘top’ doesn’t show any culprit and it does not seem to be caused by any one specific container?
(this makes me wonder how people can run anything at all off of a Raspberry Pi. My machine isn’t “beefy” but a Pi would be so much less.)
Run top and paste the output of the top portion of the screen.
I would suspect it is I/O wait. You can get into disk contention if you have multiple containers fighting for the same disk. You will notice the I/O queue building up, which shows you are waiting on I/O transactions.
%Cpu(s): 67.4 us, 13.0 sy, 0.0 ni, 19.4 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
See the field labeled wa: that is wait time, basically the time the CPU sits idle waiting for I/O to complete.
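If you want to see the queue building directly, iostat from the sysstat package can show it (a minimal sketch; column names vary slightly between sysstat versions):

```
# Extended per-device stats, refreshed every second.
# Watch aqu-sz (average queue length) and %util on the busy disk;
# a queue that keeps growing while %util sits near 100 means the
# disk can't keep up with the requests.
iostat -x 1
```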
If the wait time is high, you can increase the write cache used by Linux, BUT if the system crashes you are at risk of losing data that hasn’t been flushed to disk yet.
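For the cache tuning mentioned above, one common knob is the dirty-page writeback sysctls. A sketch, with example values only and the same caveat: larger values mean more unflushed data lost if the box crashes:

```
# Show current writeback thresholds (percent of RAM that may hold dirty pages).
sysctl vm.dirty_background_ratio vm.dirty_ratio
# Raising them lets Linux buffer more writes in RAM, smoothing I/O spikes
# at the cost of more data at risk on a crash. Values here are arbitrary.
sudo sysctl vm.dirty_background_ratio=10
sudo sysctl vm.dirty_ratio=30
```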
“Load” is not “CPU usage.” It’s “system usage”: besides runnable processes it also counts processes stuck in uninterruptible sleep, so disk and network activity push it up, including swapping if you’re low on memory.
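You can see this directly: processes in uninterruptible sleep (state D, almost always stuck on I/O) drive the load up even though they use no CPU. A quick way to list them:

```
# List processes currently in uninterruptible sleep (state D).
# Lots of D-state processes with a quiet CPU = load driven by I/O wait.
ps -eo state,pid,comm | awk '$1 ~ /^D/'
```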
vmstat can tell you what your disk I/O looks like; iotop can help narrow it down to a process.
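For example (iotop needs root, and the -o flag limits output to processes actually doing I/O):

```
# 1-second samples: the 'b' column counts blocked processes,
# 'wa' under cpu is I/O wait.
vmstat 1 5
# Accumulated per-process I/O, showing only processes that are active:
sudo iotop -oPa
```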
The last time I saw this was on a slow-failing HDD.
A quick fsck might get you a few answers; you can find more info in the manual (man fsck). It could be just one or two bad blocks that you can recover, which would fix the problem (though, of course, it’s then time to back up your data).
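Since a slow-failing disk is a prime suspect here, it’s also worth pulling SMART data before (or alongside) fsck. A sketch, assuming the smartmontools package is installed and the disk is /dev/sdb as in the edit above:

```
# Dump SMART health, attributes and error log. Reallocated_Sector_Ct or
# Current_Pending_Sector creeping upward is a classic dying-drive sign.
sudo smartctl -a /dev/sdb
# Kick off a short self-test, then re-check with -a a few minutes later.
sudo smartctl -t short /dev/sdb
```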
The other, slightly unusual time I’ve seen it is with mixed RAM: pairing 2×6GB with 2×4GB sticks did some really odd things to the system. If it’s not the disk, and your box will boot with one stick of RAM, try that to see if it fixes the issue. It could be that your RAM speeds are mismatched (or you’re like me and just put in two sticks you had lying around, and it basically worked until it didn’t).
An outlier that I’ve not seen on modern machines is I/O wait for a CD-ROM to spin up, even if you’re not accessing the CD-ROM; it’s normally caused by bad cabling. Based on the age of your machine this is unlikely, but it might be worth unplugging devices to see if one is bad and not reporting properly.
This is, of course, assuming dmesg shows nothing suspicious.
Final thought: see if you’re running SELinux. If you are, turn it off and try again. Those policies are complex, and something installed in a non-standard place could cause SELinux to slow I/O while it fills your logs with warnings.
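A quick way to check (note: stock Ubuntu ships AppArmor rather than SELinux, so getenforce may simply not exist on this box):

```
# Report SELinux mode: Enforcing, Permissive, or Disabled.
getenforce
# Temporarily switch to permissive mode (until reboot) to test the theory.
sudo setenforce 0
```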
Hope that helps,
Do not run fsck on a mounted device
So how do I run this on /dev/sda? I can’t very well unmount the OS drive…
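For the root filesystem you have to check it while it isn’t mounted read-write; two hedged options, assuming a systemd-based Ubuntu and an ext4 root:

```
# Option 1: force a check at next boot, before the root is mounted
# read-write. At the GRUB menu press 'e' and append to the linux line:
#   fsck.mode=force fsck.repair=yes
# Option 2: boot a live USB so /dev/sda1 is not mounted at all, then:
sudo fsck -f /dev/sda1
```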
Many people aren’t running containers on a Raspberry Pi. While feasible, it was notoriously poor until the 8GB Pi 4, and it’s still easily bounded by SD card I/O. Are there docker stats, so you can see the disk and net I/O of each container?
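For what it’s worth, plain docker stats already reports per-container network and block I/O:

```
# One-shot snapshot of every running container; the NET I/O and
# BLOCK I/O columns show cumulative traffic since each container started.
docker stats --no-stream
```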
I’d try each application one by one. Maybe write a script to monitor the load and stop the program if it goes past your desired threshold, then notify you; see the sketch below.
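A rough sketch of such a watcher (the container name and threshold are assumptions; adjust both, and swap docker stop for however you run the app):

```sh
#!/bin/sh
# Stop a suspect container once the 1-minute load average passes a threshold.
CONTAINER=photoprism   # hypothetical name, adjust
THRESHOLD=8            # arbitrary load cutoff, adjust

while true; do
  load=$(cut -d' ' -f1 /proc/loadavg)
  # POSIX sh can't compare floats, so delegate the comparison to awk.
  if awk -v l="$load" -v t="$THRESHOLD" 'BEGIN { exit !(l > t) }'; then
    docker stop "$CONTAINER"
    echo "load $load exceeded $THRESHOLD, stopped $CONTAINER" | logger -t loadwatch
    break
  fi
  sleep 10
done
```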
It could also be a setting in some app like Photoprism or Immich. I think one of them uses TensorFlow to classify images; that would increase the load if it’s running in the background.
Maybe try them with an empty directory so there is no data to process and see if you encounter the error. Then add some data and see how the load behaves.