Severe Performance Degradation in PHP-Based Morse Code Translation Service on Virtualmin Under Concurrent Load

SYSTEM INFORMATION
OS type and version Ubuntu 22.04 LTS (64-bit)
Webmin version 2.105
Virtualmin version 7.30.6
Webserver version Apache 2.4.x with PHP-FPM
Related packages PHP 8.1, OPcache, MySQL 8.0

I am currently facing a significant performance issue with a Morse code translation service that I’ve deployed on a Virtualmin-managed VPS. The application is built using PHP and relies on a lookup-based algorithm to convert plain text into Morse code and vice versa. While everything works smoothly in a local development setup, the production environment behaves very differently. As soon as multiple users start accessing the service at the same time, the response time increases drastically, sometimes taking several seconds for even short input strings. This is becoming a major usability concern, especially since the tool is intended to be lightweight and responsive.

The core of the problem seems to lie in how the server handles concurrent requests. Each translation request triggers the loading of a relatively large associative array that maps characters to Morse equivalents. Although this operation is fast in isolation, under concurrent load it appears to strain server resources. I suspect that PHP-FPM process management or Apache configuration within Virtualmin might be contributing to inefficient handling of these repeated operations. However, I’m not entirely sure whether the issue is CPU-bound, memory-related, or tied to process spawning limits.

I have already attempted some basic optimizations, such as enabling OPcache, increasing memory limits, and tweaking PHP-FPM pool settings like pm.max_children and pm.start_servers. While these changes resulted in slight improvements, they did not resolve the core latency problem. Monitoring tools show spikes in CPU usage during peak requests, but it’s unclear whether this is due to inefficient code execution or server-level misconfiguration. Additionally, I’ve noticed that the server load average increases disproportionately compared to the actual number of incoming requests.

Another observation is that repeated requests for similar input do not benefit from any form of caching. Since Morse code translation is deterministic, I was expecting better performance if results could somehow be reused. However, in the current setup, every request is processed from scratch. I’ve considered implementing application-level caching using something like Redis or Memcached, but I’m unsure how well these integrate within a Virtualmin environment or whether they would significantly reduce the overhead in this case.
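To make the idea concrete, the kind of application-level caching I have in mind looks roughly like this (a sketch, not my actual code: `real_translate()` is a placeholder for my translator, and APCu is just one assumed option, with a fallback to a per-process static array when the extension is absent):

```php
<?php
// Hypothetical sketch of result caching. real_translate() stands in
// for the actual Morse translation routine.
function real_translate(string $text): string
{
    return strrev($text); // placeholder for the real conversion work
}

function cached_translate(string $text): string
{
    $key = 'morse:' . $text;

    // Prefer APCu (a simple in-process shared cache) when it is loaded.
    if (function_exists('apcu_fetch')) {
        $hit = apcu_fetch($key, $found);
        if ($found) {
            return $hit;
        }
    }

    // Fallback: a static array that lives for this worker's lifetime.
    static $local = [];
    if (isset($local[$key])) {
        return $local[$key];
    }

    $result = real_translate($text);
    if (function_exists('apcu_store')) {
        apcu_store($key, $result, 300); // keep for 5 minutes
    }
    return $local[$key] = $result;
}

echo cached_translate('HELLO'), "\n"; // placeholder translator reverses input
```

Since translation is deterministic, repeated inputs would then skip the work entirely, though I don't know yet whether that overhead is actually the bottleneck.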

I’m also questioning whether the architecture itself is contributing to the problem. Using PHP for this type of real-time processing might not be the most efficient choice, especially under concurrent load. I’ve read that event-driven environments like Node.js or asynchronous Python frameworks can handle such scenarios more gracefully. That said, migrating the entire service would be a considerable effort, so I’d prefer to first explore whether the current stack can be optimized effectively within Virtualmin.

At this point, I’m looking for guidance on how to properly diagnose and resolve this performance bottleneck. Are there specific Virtualmin, Apache, or PHP-FPM tuning strategies that are particularly effective for handling high-frequency, low-complexity requests like this? Would introducing caching layers or persistent data structures make a noticeable difference? Or is it more practical to rethink the implementation approach altogether? Any insights or recommendations from those who have dealt with similar performance issues in a Virtualmin setup would be extremely helpful. Very sorry for long post!

Your suspicion about Virtualmin is unfounded. Think of Virtualmin as a GUI that assists you in configuring the server. Of all the various panels available, Webmin + Virtualmin contribute the least to inefficiency.

You are on the right track about concurrency, though. PHP is very good at handling a large number of quick tasks: you can serve lots of users if PHP finishes each task quickly, but if PHP sits waiting several seconds for a task to complete, the number of users you can hope to serve drops to a handful.

So you could be right about an architecture issue. Your architecture must be designed (or modified) to free up the PHP worker quickly, even if the actual heavy lifting is done by some other process, and then re-engage PHP when the task is done.

That’s all I can say in defence of Virtualmin and PHP. If you use Node or Python, the problems are different.

Also, is your program pure PHP, or does it involve a database? If it does, make sure the tables use InnoDB rather than MyISAM, which locks entire tables during writes and can stall concurrent requests.

Also run phpinfo() and make sure OPcache is installed and enabled for your version of PHP.

Who is your VPS provider? Not all of them deliver what they promise and what you pay for. Some are known for overselling; some are not.

What are the CPU and Memory specs? Are you running other services or is this strictly for this web app?

What processes have spiking CPU load? What are they doing when they spike?

My usual recommendation is:

  1. Check top or htop for what processes are the problem.
  2. Check the error log and the PHP log for the domain for obvious problems…e.g. timeouts to the database or an API or something.
  3. If you’re using a database, enable slow query logging, and find the most offensive queries…fix those with indexes or refactoring or both. Slow Query Log Overview | Server | MariaDB Documentation
  4. Profile your application. Maybe just add better logging so you know roughly what your app is doing when things get slow. https://xdebug.org/
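The logging in step 4 can start very small: wrap the suspect code path in coarse timing that goes to the PHP error log. A sketch, where `translate()` is a hypothetical stand-in for the real translation call:

```php
<?php
// Sketch: coarse timing around a suspect code path, written to the
// PHP error log so it shows up alongside the domain's other logs.
// translate() is a hypothetical stand-in for the real translator.
function translate(string $text): string
{
    return strtoupper($text); // placeholder for the real translation logic
}

$t0 = microtime(true);
$result = translate('sos');
$elapsedMs = (microtime(true) - $t0) * 1000;

// Tag the entry so it is easy to grep out of the error log later.
error_log(sprintf('[morse-timing] translate took %.3f ms', $elapsedMs));
echo $result, "\n";
```

A few lines like this around the translation, any database calls, and any string preprocessing will usually tell you which part eats the time before you reach for a full profiler.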

Don’t try to tune before you even know what’s slow.

Also, Virtualmin is not involved in serving requests at all. You can literally stop Webmin/Virtualmin and your apps/sites will continue as if nothing happened (because nothing did happen…Virtualmin is not in the request processing path).

Apache is faster than any PHP app by several orders of magnitude. If you’re tweaking Apache, you’re probably wasting your time.

For PHP, unless the values are pathological (crazy high for the hardware, for instance), adjusting start_servers or max_children is only going to have a marginal effect on performance.

Generally speaking, the application is nearly always the weak link.

Virtualmin is not your OS. You can run Redis (or Valkey, the still-open-source alternative to Redis) or Memcached.

If your app is doing a simple key->value lookup, then a key/value cache might help (but given your description of the app, I don’t think it would…I think you’d just be adding another layer of latency to a bad algorithm). You could also implement memoization or other things in your app, as well. Many possibilities. But, I think you need to understand the problem before you start throwing solutions around, and you don’t yet understand the problem.
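Memoization in the app itself can be as small as a static cache in front of the lookup, something like this sketch (`morse_encode()` is hypothetical and the map is abbreviated):

```php
<?php
// Sketch of in-app memoization: results are kept in a static array for
// the lifetime of the PHP-FPM worker, so repeated inputs skip the work.
// The character map is abbreviated for the example.
function morse_encode(string $text): string
{
    static $cache = [];
    if (isset($cache[$text])) {
        return $cache[$text]; // repeated input: no recomputation
    }

    static $map = ['S' => '...', 'O' => '---', 'E' => '.', 'T' => '-'];
    $out = [];
    foreach (str_split(strtoupper($text)) as $ch) {
        $out[] = $map[$ch] ?? '?'; // '?' for unmapped characters
    }
    return $cache[$text] = implode(' ', $out);
}

echo morse_encode('SOS'), "\n"; // "... --- ..."
```

Note this costs essentially nothing and adds no network hop, unlike an external cache, which is why it is worth trying before Redis or Memcached for a workload this small.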

How big is “relatively large”? I can’t imagine it’s all that big. There are only 26 letters, right? Wikipedia says there’s 26 letters and 10 numerals. A hash with 36 key/values is not large. It is very very small, as data goes. Practically nothing. So, what’s in the large hash, if not that?

Anyway, I think you have a coding problem, not a server configuration problem. Based on your description of the problem, you have an embarrassingly parallel task that is also very, very, small. Any modern system should be able to do millions of those. So, if it’s bogging down…I dunno. We’d need to see the hot spots in your code, in order to make suggestions.

If the data set really is just a 36 entry associative array, then you should not be thinking about adding more software to the processing. That fits in memory. And, I don’t mean “barely fits in memory”, I mean you easily could fit a million copies in memory on modern hardware. I’m sure you don’t have a million simultaneous users.
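One concrete way to make sure the map is not being rebuilt per request is to declare it as a class constant: a constant array is part of the compiled script, so with OPcache enabled it lives in shared memory rather than being reconstructed on every hit. A sketch (abbreviated map; the real one is only a few dozen entries):

```php
<?php
// Sketch: the whole Morse map as a class constant. With OPcache on,
// the constant array is compiled once and shared, not rebuilt per request.
final class Morse
{
    private const MAP = [
        'A' => '.-', 'B' => '-...', 'E' => '.', 'O' => '---', 'S' => '...',
        // ... remaining letters, digits, and punctuation ...
    ];

    public static function encode(string $text): string
    {
        $out = [];
        foreach (str_split(strtoupper($text)) as $ch) {
            $out[] = self::MAP[$ch] ?? '?'; // '?' for unmapped characters
        }
        return implode(' ', $out);
    }
}

echo Morse::encode('SOS'), "\n";
```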


When I checked briefly with top, I did notice that PHP-FPM worker processes were the ones spiking CPU during concurrent requests, but I haven’t yet drilled down into what they’re doing at that exact moment. I also haven’t enabled any proper profiling yet, so I’m essentially guessing based on symptoms rather than actual data. I’ll go ahead and set up Xdebug or at least add more granular logging around the translation logic to see where time is actually being spent.

Also, your point about the “large” associative array is fair: in reality, the core Morse mapping itself is small (letters, numbers, a few symbols). I think the inefficiency may actually be coming from how it’s initialized or used per request, or possibly from surrounding logic (string parsing, repeated transformations, or unnecessary function calls). I’ll take a closer look at whether anything is being rebuilt or processed redundantly on each request.

Based on your advice, I’ll pause on tweaking Apache/PHP-FPM further and instead focus on profiling and identifying hotspots in the code first. Once I have clearer data on where the bottleneck is (CPU time, memory, or something else), I’ll be in a much better position to decide whether optimization, memoization, or architectural changes are actually needed. Appreciate the reality check; it’s a good reminder not to optimize blindly.


You might use something like this already, but I use these two variables to work out RAM and time spent in various places in the code:

// Save the start time and memory usage.
$startTime = microtime(true);  // pass true to get a float timestamp
$startMem  = memory_get_usage();
// ...then at any checkpoint, compare:
printf("elapsed: %.4f s, memory: %d bytes\n", microtime(true) - $startTime, memory_get_usage() - $startMem);

Put them at the beginning of the script and then compare at various points.