Performance with Apache on VPS

I have been getting woeful page loading times on my Contabo VPS (8 cores, 30GB RAM).
The SEO/PHP app itself is solid; I previously ran it on a more expensive VPS with double the RAM and achieved 2-second page loads. This isn't a post about page optimisation but server optimisation - the page itself is in good shape, scoring 93 on Lighthouse best practices.

I suspect the disk I/O on a cheap VPS is to blame, but I wanted to squeeze as much performance out of the setup (LAMP stack) as possible. I have added some .htaccess Apache modifications and given MariaDB a little performance tuning, and seen better results. Now I am looking at Apache itself.

I had previously used WHM/cPanel with an nginx proxy for static caching, which was brilliant. However, it seems that setting up Webmin with nginx from the outset is preferred over switching away from Apache or running both. So the option seems to be to use Apache's mod_cache, but I can't see anyone on the forums here talking about using or configuring it. I can see the modules are on the server and could include them in the LoadModule config. I know this is potentially a broad topic, but I am trying to keep the frame around squeezing more out of what's available. So, questions:

  1. Has anyone experience of configuring mod_cache with apache on webmin (Almalinux)?
  2. Does a webmin LEMP stack generally out perform a webmin LAMP stack?
  3. What static caching solutions have others used with webmin?
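For anyone in the same position, here is a minimal sketch of what a mod_cache_disk setup might look like. All paths and values are illustrative, not recommendations; on AlmaLinux the modules are normally enabled in `/etc/httpd/conf.modules.d/00-base.conf` (`mod_cache.so` and `mod_cache_disk.so`), and the fragment below would live in `/etc/httpd/conf.d/`:

```shell
# Write an example mod_cache_disk config fragment. Values are illustrative
# only - adjust CacheRoot, sizes and TTLs for your own setup.
CONF=./cache.conf   # in real use: /etc/httpd/conf.d/cache.conf

cat > "$CONF" <<'EOF'
<IfModule mod_cache_disk.c>
    CacheRoot /var/cache/httpd/mod_cache_disk
    CacheEnable disk /
    CacheDirLevels 2
    CacheDirLength 1
    # Only cache responses up to ~1MB (value is in bytes)
    CacheMaxFileSize 1000000
    # Cache responses that lack a Last-Modified header, with a default TTL
    CacheIgnoreNoLastMod On
    CacheDefaultExpire 3600
</IfModule>
EOF

echo "wrote $CONF"
# Then validate before reloading:
#   apachectl configtest && systemctl reload httpd
```

Make sure the `CacheRoot` directory exists and is writable by the httpd user, or nothing will be stored.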

I ran fio and got the output below, but I'm not sure what it says, so if you can interpret it please feel free to feed back.
TIA

===========
fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --invalidate=1 --bsrange=4k:4k,4k:4k --size=512m --runtime=120 --time_based --do_verify=1 --direct=1 --group_reporting --numjobs=1
rand-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.35
Starting 1 process
rand-write: Laying out IO file (1 file / 512MiB)
Jobs: 1 (f=1): [w(1)][100.0%][w=3992KiB/s][w=998 IOPS][eta 00m:00s]
rand-write: (groupid=0, jobs=1): err= 0: pid=1257245: Tue Mar 26 14:28:24 2024
write: IOPS=1075, BW=4301KiB/s (4404kB/s)(504MiB/120034msec); 0 zone resets
slat (usec): min=6, max=110921, avg=67.49, stdev=802.31
clat (usec): min=89, max=144702, avg=29685.63, stdev=12129.36
lat (usec): min=131, max=160943, avg=29753.12, stdev=12144.60
clat percentiles (usec):
| 1.00th=[ 816], 5.00th=[ 1418], 10.00th=[ 10683], 20.00th=[ 27132],
| 30.00th=[ 30540], 40.00th=[ 31327], 50.00th=[ 31851], 60.00th=[ 32113],
| 70.00th=[ 32637], 80.00th=[ 33817], 90.00th=[ 36963], 95.00th=[ 42730],
| 99.00th=[ 66847], 99.50th=[ 79168], 99.90th=[114820], 99.95th=[127402],
| 99.99th=[139461]
bw ( KiB/s): min= 3187, max=45748, per=100.00%, avg=4306.61, stdev=3323.12, samples=239
iops : min= 796, max=11437, avg=1076.55, stdev=830.79, samples=239
lat (usec) : 100=0.01%, 250=0.01%, 500=0.06%, 750=0.51%, 1000=1.94%
lat (msec) : 2=3.92%, 4=1.70%, 10=1.72%, 20=3.02%, 50=84.30%
lat (msec) : 100=2.64%, 250=0.18%
cpu : usr=1.39%, sys=5.29%, ctx=79760, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,129063,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
WRITE: bw=4301KiB/s (4404kB/s), 4301KiB/s-4301KiB/s (4404kB/s-4404kB/s), io=504MiB (529MB), run=120034-120034msec

Disk stats (read/write):
sda: ios=179/129987, merge=30/7876, ticks=321/3761091, in_queue=3761474, util=98.69%

Apache is not why your website is slow. It’s never why a website is slow. (Though if you installed mod_php, that might be contributing to the problem.)

You haven’t mentioned what application(s) you’re running, and what tuning you’ve done to the database(s) that back them, so I don’t really have any advice for further troubleshooting.

Also, you need to include your OS and version.

https://forum.virtualmin.com/guidelines

You’re not the only one - go to https://httpd.apache.org/ and scroll to the bottom: Apache is looking for someone to write a better caching guide. When I saw that a couple of months ago I took it up as a side project, hoping to learn Apache caching to the level needed for that guide someday. Right now it’s going really slowly because I can only set aside a couple of hours every few weeks for learning non-mission-critical stuff. The call for a better caching guide has been up since 2016, so you may have to wait a while longer for one. The current Apache 2.5 (or rather 2.6) status mentions some development of cache features, so I guess it’s not completely dead.

No. We don’t do holy wars here. If you like nginx, use nginx. If you don’t like Apache, don’t use Apache.

Can you spin up an instance on a better machine for testing? Even a virtual server?

I’ve deleted off-topic Apache vs. nginx stuff. It doesn’t help OP solve their problems. Please don’t make me police the forums.

(repost because I replied to the wrong person)

Try running the following command:

$ dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync

This will give you a one-line estimate of disk latency. On our server (2x2.5GHz, 2GB RAM, Hetzner shared vCPU) it’s around 0.5s, which is pretty close to the load time of one very low-traffic site we host with no caching whatsoever. Keep in mind this is only true for initial page visits; consecutive ones are automatically cached to RAM and are around 5 times faster, hovering near 0.1s.
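Since a single run can vary a lot on a congested VPS, a steadier picture comes from repeating the test a few times - a rough sketch (file path and repeat/block counts are arbitrary; bump `COUNT` to 1000 to match the one-liner above):

```shell
#!/bin/sh
# Repeat the dsync dd latency test several times so one noisy run doesn't
# mislead. A small count keeps each run quick.
RUNS=5
COUNT=100
for i in $(seq 1 "$RUNS"); do
    # dd prints its timing/throughput summary line on stderr
    dd if=/dev/zero of=/tmp/test2.img bs=512 count="$COUNT" oflag=dsync 2>&1 |
        grep 'copied'
done
rm -f /tmp/test2.img
```

If the per-run times swing by 2-3x, that is usually neighbours on the VPS parent, not anything your own stack is doing.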

I tested this out on my WSL Debian with a 3.5GHz quad-core and a Samsung 970 EVO NVMe, and it spat out 0.01s latency - so at least in our case, our Hetzner VM takes most of the blame for sluggish raw performance, not Apache.

Also, if it’s of any relevance: our current server is running with vm.swappiness=100 to test out Chris Down’s article on swap (In defence of swap: common misconceptions), and so far everything he said checks out. I recently spun up a more powerful server at Hetzner (3x2.5GHz, 4GB RAM, dedicated vCPU VM, no swap) and the results were pretty much the same for initial page loads. So I think the bottleneck in our case is disk operations on the VPS parent, which can only be solved by migrating to a bare metal server or to a different provider with less congested VPS parents (I think Contabo is similar, if not worse), and for us that’s neither viable nor necessary at the moment.

We are also a Hetzner dedicated vCPU customer. I tested your command. Results:

My bad!

| SYSTEM INFORMATION | |
|----------------------|---------------------------|
| OS type and version | AlmaLinux 9.3 |
| Webmin version | 2.105 |
| Usermin version | 2.005 |
| Virtualmin version | 7.10.0 |
| Theme version | 21.09.5 |
| Package updates | All installed packages are up to date |

Hi Joe, thanks for the comments - definitely not trying to start a war here, just looking for data on performance differences. The app in question is Craft CMS, which I’ve used for the last 8 years on various servers. It’s possible more optimisation could be squeezed out of its templating queries to avoid DB N+1 problems, but I’ve already factored that in.
I’m interested in squeezing the most out of the hardware with the software (I think I skimped on the VPS), so I was looking at additional Apache config for static caching as my quick win.

I ran this several times and got the below:

1000+0 records in
1000+0 records out
512000 bytes (512 kB, 500 KiB) copied, 1.13227 s, 452 kB/s
[root@vmi1532308 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB, 500 KiB) copied, 1.5637 s, 327 kB/s
[root@vmi1532308 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB, 500 KiB) copied, 0.706295 s, 725 kB/s
[root@vmi1532308 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB, 500 KiB) copied, 0.783974 s, 653 kB/s

Does the variability suggest that the VPS is overloaded and not as performant? My previous VPS from Contabo was great, albeit more expensive with more RAM.

On the Apache front, I couldn’t seem to get mod_cache to show signs of working, but I have page loads around 2 seconds now, so it’s not all bad. The TTFB is poor, though, and I think I’ll leave it at this for now until it becomes a real issue.

You’re fine. The messages I deleted for being off-topic were not yours. I’m trying to keep the topic focused on solving your problems rather than arguing about silly stuff like Apache vs. nginx performance (the web server is never the bottleneck).

Yeah, that latency seems significantly worse. While mine constantly hovers around 0.5s, yours spikes to over 1.5 seconds. It’s most probably congestion on the VPS parent, caused by Contabo’s ultra-tight-budget approach. I don’t think it will get much better as long as you’re on a Contabo VPS.

As for mod_cache, I got it to work… sort of. The cache root was populated successfully, but the headers reported cache misses and initial page loading time was still slow. I ran out of time to troubleshoot further during my last session, and the guides I followed were extremely basic, but I’ll keep trying to make it work.
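For debugging the miss-vs-hit question, mod_cache can be told to announce its decision in response headers, which makes curl enough to see what happened. A sketch (the conf path is an AlmaLinux-style guess; `CacheHeader` and `CacheDetailHeader` are the relevant mod_cache directives):

```shell
# Write an Apache conf fragment that makes mod_cache report what it did.
# Debug-only settings; the path is illustrative.
CONF=./cache-debug.conf   # in real use: /etc/httpd/conf.d/cache-debug.conf

cat > "$CONF" <<'EOF'
<IfModule mod_cache.c>
    # Adds an "X-Cache: HIT/MISS/REVALIDATE from <host>" response header
    CacheHeader On
    # Adds an X-Cache-Detail header explaining *why* a response was(n't) cached
    CacheDetailHeader On
</IfModule>
EOF

echo "wrote $CONF"
# After "apachectl configtest && systemctl reload httpd", request a page twice:
#   curl -sI https://your-site.example/ | grep -i 'x-cache'
# The first response should report MISS, the second HIT if caching is working.
```

The X-Cache-Detail reason is often the quick answer - for example, dynamic pages that send `Cache-Control: no-cache` or a session cookie will never be stored no matter how the cache is configured.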

Caching static files on the same machine is mostly a no-op (it takes more memory and probably won’t help performance).

There are some kinds of caching that might be helpful in this sort of deployment (query caching for your database, opcode caching for your applications), but it ain’t gonna be mod_cache. If you use PHP-FPM you get opcode caching for free. MariaDB is also pretty good at caching if you give it enough memory. Adding other services for caching is often only useful if you have multiple machines; otherwise you’re just taking memory away from the actual services you’re trying to provide. The OS-level disk I/O caching is going to be better than what mod_cache can do.
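To make that concrete, here is a sketch of the two caches that usually matter on a single LAMP box. The conf path and sizes are illustrative guesses, not recommendations - `innodb_buffer_pool_size` in particular has to be sized against the RAM that Apache, PHP-FPM and the OS page cache also need:

```shell
#!/bin/sh
# 1. Check whether PHP's opcode cache (OPcache) is compiled in. It usually
#    ships enabled with PHP-FPM on AlmaLinux; if php isn't here, just say so.
if command -v php >/dev/null 2>&1; then
    php -r 'echo function_exists("opcache_get_status") ? "opcache built in\n" : "opcache missing\n";'
else
    echo "php not installed here"
fi

# 2. Illustrative MariaDB buffer-pool fragment - on a 30GB box a generous
#    InnoDB buffer pool does far more for a DB-backed CMS than mod_cache would.
CONF=./innodb-tuning.cnf   # in real use: /etc/my.cnf.d/innodb-tuning.cnf
cat > "$CONF" <<'EOF'
[mysqld]
# Rule of thumb is 50-70% of RAM on a dedicated DB box; much less when the
# web server and PHP share the machine. 8G here is only an example value.
innodb_buffer_pool_size = 8G
innodb_log_file_size    = 1G
EOF
echo "wrote $CONF"
```

Restart mariadb after changing the buffer pool size, and watch `SHOW ENGINE INNODB STATUS` to see whether the pool is actually being filled by your workload.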

OP, you need to isolate your performance problem. Deciding to cache some random thing when you don’t even know what’s slow is not a productive use of time. It’ll probably just waste memory (and your time).

Edit: And running a whole other webserver for caching on the same system (e.g. nginx+Apache) is dumb. It’s indefensible on the kinds of systems people run control panel software on. Utter nonsense. Waste of memory.

> And running a whole other webserver for caching on the same system (e.g. nginx+Apache) is dumb. It’s indefensible on the kinds of systems people run control panel software on. Utter nonsense. Waste of memory.

A thesis, please? Why do you believe running nginx and Apache together is bad? Nginx is faster at static resources; Apache is faster at dynamic rendering/processing. It makes sense to use both if you need near-perfect performance.

The only obvious case I see is when the load is spread across multiple machines. You are repeating the oft-quoted talking points, but I cannot easily find any data that supports them or, more importantly, says which setups would benefit. People are already running VM on marginally acceptable hardware.

I would not personally do it myself; nginx + FastCGI is more than enough. But if you have millions of active users, every millisecond matters: the more optimised your application and servers are, the more you save by not buying extra servers to compensate for mistakes. I used Apache for many years; it did a great job and served well. You won’t go wrong picking either one unless you have millions of active users - only then does more complex scalability come to mind. Everything depends on your budget.

Then we could afford the resources of Google/Amazon/Microsoft; until then we will probably remain “individual hobbyists”.
