PHP-FPM is semi-broken, sort of...

Hi Everyone,

I just wanted to point out an issue I ran into that probably affects everyone in some circumstances, even if they haven’t spotted it yet :slight_smile:

The implementation of PHP-FPM has a flaw: it does not allow mod_rewrite to operate if a URL ends in ‘php’, with or without a preceding period.

For example, creating a WordPress post with the permalink ‘/i-love-php/’ will result in a 404, as it’s matched by the ProxyPassMatch rule in the vhost before mod_rewrite has a chance to do anything useful with it.

I’m not sure if there’s a way to solve this completely while still using ProxyPassMatch, but at the very least the regex could be tidied up to escape the period preceding ‘php’ so it only matches ‘.php’ and not ‘[anything at all]php’.

Here’s a copy of the regex for reference:
^/(.*.php(/.*)?)$
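For illustration, a corrected directive might look something like this (the address, port, and document root are placeholders here, not the actual generated values):

```apache
# Escaping the period means only genuine ".php" requests are proxied;
# a URL like "/i-love-php/" no longer matches and falls through to
# mod_rewrite as expected.
ProxyPassMatch "^/(.*\.php(/.*)?)$" "fcgi://127.0.0.1:9000/var/www/html/$1"
```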

As an aside, I’m not familiar with the pros/cons of PHP-FPM via ProxyPassMatch vs. FastCgiExternalServer… what’s the rationale?

Kind Regards,
Michael

That’s a good point, I’ll talk to Jamie about it. Looking at what we’re generating there, I don’t see any reason other than “typo” for that regex not being more specific. So, yeah, I think it’s a bug.

Our choice of how to deploy is mostly based on what is: Widely available without additional modules, stable, reasonably performant, reasonably simple to configure, good resource usage in a shared hosting environment, secure in a shared hosting environment, well-maintained and preferably in core Apache.

We’re using basically this configuration: https://wiki.apache.org/httpd/PHP-FPM

Which seems to be widely adopted and accepted as “good” on a wide variety of metrics. So, we are communicating via the FastCGI protocol on systems that have it. It ticks all the boxes above. FastCgiExternalServer requires a non-core Apache module that is questionably maintained, and I don’t think there are any performance benefits. We chose mod_fcgid back when we were deciding between mod_fastcgi and mod_fcgid as a method of running PHP because it was in core and seemed better maintained…those reasons still hold, and mod_fastcgi is even less interesting now that we have mod_proxy_fcgi and PHP-FPM.

In short: AFAIK, what we are shipping is the gold standard for running PHP applications fast, efficiently, and securely. If that’s not the case, somebody needs to tell the PHP and Apache folks, because they believe this is the right way, too! :wink:

Actually, it looks like Jamie has already fixed this in github:

For old style sockets: https://github.com/virtualmin/virtualmin-gpl/blob/master/php-lib.pl#L359

And for fcgi: https://github.com/virtualmin/virtualmin-gpl/blob/master/php-lib.pl#L364

Pretty sure that’ll do what we want.

Hi Joe,

Thanks for the response, I appreciate you taking the time :slight_smile:

I’ve been doing some testing on this and, not to sound like an alarmist… but it would appear that a ProxyPassMatch-based implementation is dangerously flawed.

The implications are larger than I first thought. The implementation results in the contents of the .htaccess file being ignored for PHP requests, as the request is proxied to PHP-FPM and Apache never applies the rest of the configuration.

Here’s an example to illustrate:

/bad.php:

<?php echo 'Bad things happened'; ?>

/.htaccess:

<Files "bad.php">
Order Allow,Deny
Deny from all
</Files>

Results:

mod_php: 403 Access Denied
fcgid: 403 Access Denied
php-fpm (ProxyPassMatch): 200 Bad things happened

In short - there is a large, dangerous security issue when using the current implementation.

I’m not sure what the best solution to this is, but I think there’s a serious problem here!

Kind Regards,
Michael Thomas

So, it’s not that htaccess is being skipped (you can test it by blocking access to any other kind of file, or doing anything else you’d do in htaccess). It gets processed, but for php files, the proxy rule overrides that.

The security implications don’t seem dire to me; I mean, I guess there could be situations where someone might use basic authentication or whatever to protect some PHP script, but most applications are session-authenticated, and their config files and the like are usually built so that they aren’t exploitable even if they happen to be in a PHP file and happen to be accessible to the outside world (e.g. Drupal’s settings.php just sets variables…it has no output, and while it could be executed, there’s no way to actually get anything useful out of it).

I’ve been googling to see if there’s some consensus on whether this is avoidable, but nothing much is coming up. This would, I think, be the behavior of any proxied execution environment (not just PHP). But, if you know of a way to alter the config such that even PHP files can be controlled by .htaccess, we’d certainly consider changing the default config.

Edit: Actually, it looks like we can use SetHandler inside of a FilesMatch block like this:

<FilesMatch \.php$>
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>

I’ll do some testing and talk to Jamie about whether we can roll this in, if it resolves the issue.

So, after a bit of research, it looks like this is a pretty new feature; Apache 2.4.10 and above, according to the docs, though it seems to have been backported in CentOS, as my C7 system works with this config.

I’ve opened a ticket for Jamie about it. I don’t think it can make it into 6.0.1 of the module which is coming in a day or two, as it’s gonna need some testing on all of our supported distros to figure out where we have to fall back to the old behavior, but it can probably make it into the next release after.
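One possible shape for that fallback, sketched here with mod_version’s <IfVersion> (the address, port, and paths are placeholders, and the real logic would more likely choose per-distro at config-generation time rather than per-directive):

```apache
# Hypothetical sketch: prefer SetHandler where Apache is new enough
# (2.4.10+), otherwise fall back to the old ProxyPassMatch behavior.
# <IfVersion> requires mod_version to be loaded.
<IfVersion >= 2.4.10>
    <FilesMatch \.php$>
        SetHandler "proxy:fcgi://127.0.0.1:9000"
    </FilesMatch>
</IfVersion>
<IfVersion < 2.4.10>
    ProxyPassMatch "^/(.*\.php(/.*)?)$" "fcgi://127.0.0.1:9000/var/www/html/$1"
</IfVersion>
```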

Hi Joe,

Thanks for responding.

Personally I do think it’s a fairly serious flaw, as in practice I’ve seen a fair number of web applications that are designed under the assumption that an .htaccess file will be processed prior to the request being handed off to PHP for execution.

Some examples I’ve seen off the top of my head:

- Crude API ‘addons’ for existing software that restrict access to known ‘good’ IP addresses
- View templates that are written under the assumption that they cannot be executed directly, as there is a .htaccess file that would normally prevent that from occurring
- htpasswd-protected applications (as you pointed out)

A classic example would be a view template that can be used to compromise a site via reflected XSS, as the template operates under the assumption that input has already been sanitized (by the web application) and that the script cannot be executed directly (due to an .htaccess file).

Putting content and application security aside, this also means, for example, that a site owner cannot easily block the IP address of a known attacker (e.g. DoS) from using up resources, as they normally would.
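For instance, a common .htaccess block like the one below (using a documentation IP as a stand-in) would still apply to static files, but under ProxyPassMatch it never takes effect for .php requests, since those are handed to PHP-FPM before the access rules run:

```apache
# .htaccess: block a known-bad client; ineffective for .php URLs
# when ProxyPassMatch proxies them to PHP-FPM first.
Order Allow,Deny
Allow from all
Deny from 203.0.113.7
```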

If a practical alternative isn’t available, perhaps it might be worth adding a warning message to the option so people are aware of the complete implications and can make an informed decision…?

Kind Regards,
Michael Thomas

As I mentioned above, we’re switching the behavior to use a SetHandler in a FilesMatch block on systems that support it (it requires quite a new version of Apache, so CentOS 7, Ubuntu 16.04, and Debian 9, maybe 8, seem to be all we can use this tactic on).
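For what it’s worth, on those newer Apache versions the same FilesMatch approach also works with a unix socket (the socket path below is a guess; it varies by distro and PHP version):

```apache
<FilesMatch \.php$>
    # "unix:" proxy URLs need Apache 2.4.10+; "localhost" after the
    # pipe is a required dummy hostname, not a real host.
    SetHandler "proxy:unix:/var/run/php-fpm.sock|fcgi://localhost"
</FilesMatch>
```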