CPU hits 100% and server turns off for a few seconds

Guys, I have an issue that I can’t find either a reason for or a solution to. Lately the CPU has been going up to 100% (and sometimes 101%), then the server turns off for a few seconds, then it comes back and CPU usage returns to normal. I’ve disabled unnecessary services and set up virtual memory (a swap file), and still can’t find a solution. Anyone have any idea what could be wrong? Thanks in advance…

SYSTEM INFORMATION
OS type and version: Ubuntu 20.04
Webmin version: 1.981
Virtualmin version: 6.16 Pro
Related products version: RECOMMENDED
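
(For reference, swap status on Ubuntu can be confirmed like this; these are the standard util-linux/procps tools, and the output will of course differ per machine:)

```
# Is any swap configured and active?
swapon --show
free -h

# Watch memory and swap pressure live while waiting for the next spike
vmstat 5
```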

First off, when you ran the post-install configuration wizard, did you set things to the maximum? That can have a very big impact. If you did, you can re-run the configuration wizard and set things up to use fewer resources, which can take a lot of load off.

Secondly, are you running the Ubuntu desktop? If you are, that’s your problem. It’s a complete hog of memory and processor resources. Get rid of it.

I’ve run into a couple of people who had issues like yours, and we went round and round only to find out the desktop had been sucking up all their resources all along.
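
If you’re not sure whether the desktop is even installed, here’s a quick check plus two ways out (the package name below is the usual Ubuntu one; adjust for your flavour):

```
# Does the system boot into a graphical session?
systemctl get-default          # "graphical.target" means a desktop is in play

# Option 1: keep it installed but boot text-only
sudo systemctl set-default multi-user.target
sudo systemctl isolate multi-user.target

# Option 2: remove the desktop metapackage entirely
sudo apt purge ubuntu-desktop
sudo apt autoremove
```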

I found the issue. When I created the swap file, I didn’t give it enough space. Thanks for your input.
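
In case anyone else lands here: recreating an undersized swap file with more space looks roughly like this (the /swapfile path and 4G size are just examples; size it to your RAM and workload, and use dd instead of fallocate if your filesystem objects):

```
# Replace the old, too-small swap file
sudo swapoff /swapfile
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist it across reboots (skip if /swapfile is already in /etc/fstab)
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```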


I never use the desktop…


WHAT!?

A CPU usage spike causes your server to turn off!? Really!?

We can’t fix a hardware problem. No advice we give you about Virtualmin will fix broken hardware, and if your server reboots itself on a CPU usage spike, you have a hardware problem.
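
It’s also worth confirming the box is genuinely restarting, rather than just dropping off the network for a few seconds; the boot records will tell you (the journalctl commands assume a persistent journal):

```
# Recorded reboots and shutdowns
last -x shutdown reboot | head

# Each "outage" that produced a new boot entry was a real restart
journalctl --list-boots

# Tail of the previous boot: look for a panic or an abrupt cut-off
journalctl -b -1 -n 50
```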


O.K. I’ll look into it. However, it’s not the server turning off, but rather the VM…

Are you using Apache?

Your situation is critical, and big problems call for big efforts.

Install a good system and application monitor, like Netdata, configure it to store its data persistently, and review the data when the problem occurs again.

Identify which processes are responsible for the consumption when these peaks occur. The more important this server is, the more time you should dedicate to finding the root cause.
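
Even without Netdata, continuously logging per-process CPU with the sysstat tools will catch the culprit after the fact; a minimal sketch (the 5-second interval and log path are just examples):

```
# Install the sysstat suite (Ubuntu/Debian)
sudo apt install sysstat

# Log per-process CPU usage every 5 seconds in the background
pidstat -u 5 > /var/tmp/pidstat.log &

# After the next spike, page through the log around the outage timestamp;
# the %CPU column will point at the offending process
less /var/tmp/pidstat.log
```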


I had a similar problem not too long ago. The site was running out of RAM as well as swap, and I started getting Linux kernel OOM (out-of-memory) messages in syslog. After contacting the host, they confirmed that their main node had disk errors, which caused reads and writes to queue up and exhaust memory.

The ISP replacing the hard drive on the master/main node fixed the problem.
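
Both symptoms are easy to rule in or out from the logs (the paths below are Ubuntu defaults; smartctl needs the smartmontools package and direct disk access):

```
# Kernel OOM-killer activity
dmesg -T | grep -iE 'out of memory|killed process'
grep -i 'out of memory' /var/log/syslog

# I/O errors pointing at a failing disk (on a VPS, that means the host's node)
dmesg -T | grep -iE 'i/o error|ata.*error'

# SMART health summary, if you control the hardware
sudo smartctl -H /dev/sda
```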

However, the OP already said:

I found the issue. When I created the swap file, I didn’t give it enough space. Thanks for your input.

This topic was automatically closed 8 days after the last reply. New replies are no longer allowed.