How to migrate from Apache to Nginx

Hello,

I manage a website that receives decent traffic, and I’ve noticed that Apache seems to be causing high CPU usage, often spiking to 100% on my 16-core server. I’ve tried adjusting the variables in the mpm_prefork configuration, which helped for a few days, but the issue has returned.
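
For reference, these are the sort of knobs I've been adjusting (illustrative values, not my exact settings):

# /etc/apache2/mods-available/mpm_prefork.conf (path varies by distro)
<IfModule mpm_prefork_module>
    StartServers             8
    MinSpareServers          5
    MaxSpareServers         20
    ServerLimit            256
    MaxRequestWorkers      256
    MaxConnectionsPerChild 1000
</IfModule>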

I’m wondering if there’s an easy way to migrate from Apache to Nginx, especially since I’m not very experienced with server configurations. If there’s a way to do this directly within Virtualmin’s settings, could someone please guide me through the process?

Thanks in advance for your help!

You can usually back up all of your domains and restore them on a new system that is installed with the --bundle nginx option. That may have minor issues, or major ones if you depend on .htaccess files or other Apache features that nginx doesn’t have. Expect it to need a little troubleshooting.
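
A minimal sketch of that flow from the command line (the destination path is a placeholder; check virtualmin backup-domain --help on your own system first):

# on the old Apache system: back up every domain with all features
virtualmin backup-domain --all-domains --all-features --dest /backup/migration/

# copy /backup/migration/ to the new nginx system, then:
virtualmin restore-domain --source /backup/migration/ --all-domains --all-features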

But I am extremely doubtful Apache is the bottleneck. I am about 145% sure something else is to blame and needs attention. Apache is merely the thing that gets overwhelmed juggling all the frontend connections while waiting on the backend to give it something to serve.
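
Before migrating anything, it's worth looking at what is actually burning the CPU; something as simple as this usually points at the real culprit:

# what is actually using the CPU right now?
ps aux --sort=-%cpu | head -15

# if it's Apache workers, see what they're waiting on
# (needs mod_status enabled and a text browser like lynx installed)
apachectl fullstatus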

Just a very quick comment, and I've not thought this through much: have you tried the PHP-FPM method? It has much better controls on child processes and CPU usage.
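
For reference, those process controls live in the pool file; a sketch with illustrative numbers (the path and sensible values depend on your distro, RAM, and core count):

; e.g. /etc/php/8.2/fpm/pool.d/www.conf (path varies by distro and PHP version)
pm = dynamic
pm.max_children = 40       ; hard cap on PHP worker processes
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12
pm.max_requests = 500      ; recycle workers periodically to contain memory leaks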

Hi Joe, thanks for the reply. The issue is that the website is pretty large, almost 600 GB, so I can’t risk transferring the whole virtual server. The CPU usage returns to normal once I enable the JS challenge from Cloudflare.

Hi, thank you for responding. Can you please share that method?

Only my 2 cents’ worth - that is much more powerful than most of my servers, all running nginx with some pretty aggressive websites.

My instinct is that this is not a server problem but a website design problem. I hope that 600 GB is principally database and not simply static website code! Even if it is dynamic code, it shouldn’t be taking up that sort of space, or requiring that amount of CPU.

Even a database of that size is pretty scary.

True, but it still depends on what is in there and how well it is managed and maintained - often the last thing on the mind/remit/competence of a website designer. Again, back to the website: just moving it to nginx is not going to fix anything, and might just delay the impact. I am also curious why the “JS challenge” had any effect.
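
If a JS challenge helps, that usually points at automated traffic rather than real visitors. A rough way to check is counting the top requesters in the access log (the log path varies by distro; /var/log/apache2/access.log is the Debian/Ubuntu default):

# top 20 client IPs by request count
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20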

Most of that 600 GB is images; the code itself is only a few GB. I have multiple other websites of the same structure, all running well; only this one is causing problems. I asked the hosting provider to change the host system, and that didn’t help either. I edited the mpm_prefork file in Apache; it helped for a few days, but now the problem has returned.

Yes, I did try that method, but it didn’t help either.

If the site is mostly images, would outbound bandwidth trying to answer the requests be a bottleneck? My provider offers ‘unlimited’ bandwidth but I never get over 10 meg. :wink: One trick I saw years back was a provider setting the switch to half-duplex, which causes slowdowns from collisions.

root@main:~# dmesg | grep -i duplex
[ 17.534709] tg3 0000:02:00.0 eth0: Link is up at 100 Mbps, full duplex
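
If dmesg has rotated that message out, ethtool reports the same thing directly (assuming the interface is eth0; substitute your own):

ethtool eth0 | grep -i duplex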

Thanks. I assume that the images are stored in a web-efficient format and not simple .bmp, and are not relying on a database to manage them dynamically. So if it is purely file related, that should take my concern about a database issue out of the picture.

So is there any caching of those image files? Presumably all the clients are being allowed to cache - it is possible to prevent this (again, carelessly coded meta tags in the head).
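
For what it’s worth, client caching of images is usually controlled server-side rather than via meta tags; a sketch of the usual mod_expires approach (assuming the module is enabled; the lifetimes are just examples):

# in the vhost or .htaccess: let browsers cache images for 30 days
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg "access plus 30 days"
    ExpiresByType image/png  "access plus 30 days"
    ExpiresByType image/webp "access plus 30 days"
</IfModule>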

It would be nice to know why you are actually getting temporary relief from these various actions you’ve attempted.

Just to clarify (I might have missed it): this is a PHP-driven website with client-side JS? And have we eliminated all the usual classic PHP issues, like version, FPM, etc.?