
I'm hosting several WordPress sites on a LEMP VPS.

  • Each site has its own Wordpress install (located under /srv/http/domain/wordpress), its own user account (site-user) and its own php-fpm pool running as site-user.
  • Nginx is running under the http account and all files under /srv/http/domain/wordpress have site-user:http ownership with 0640 (files) and 2750 (directories) permissions.

So, each user has full RW access to their own files/directories, but no access to other sites' files/directories. Nginx has R-only access to every site's files/directories.

I'm using the setgid bit to force new directories created under /srv/http/domain/wordpress to inherit the http group so Nginx has read access to these. This works as expected when files are created via SFTP.
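For reference, the layout described above can be produced with something like this (example.com standing in for the actual domain):

chown -R site-user:http /srv/http/example.com/wordpress
find /srv/http/example.com/wordpress -type d -exec chmod 2750 {} +   # setgid so new entries inherit the http group
find /srv/http/example.com/wordpress -type f -exec chmod 0640 {} +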

This seems sane to me and I can update plugins, themes and WordPress successfully through the wp-admin interface. However, anytime I execute a task that requires WordPress to create new files/directories -- basically anytime I install or update a plugin/theme or update Wordpress itself -- the resulting file ownership is site-user:site-user. What I'd like is to have the newly created files/directories respect the setgid bit and create files/directories with site-user:http ownership so Nginx can read these files without my intervention.

I set the following in each site's wp-config.php file:

  • define( 'FS_CHMOD_DIR', ( 02750 & ~ umask() ) );
  • define( 'FS_CHMOD_FILE', ( 0640 & ~ umask() ) );

This forces the permissions to what I want but doesn't have any effect on ownership.

I've read this and this but the "solution" requires modifying PHP code to avoid using move_uploaded_file() which is fine if you're writing your own PHP code, but I'm not going to start modifying WordPress PHP files. I'm not even certain this is the source of my issue.

Also, I understand I could loosen permissions and set them to 755 (directories) and 644 (files) so that Nginx could read files regardless of whether it had group ownership over them, but for security reasons I'm trying to avoid world-readable files on a shared server.

How do I control ownership of files created by WordPress/PHP?

  • Please read the wordpress tag you included. You are asking an off-topic question. See On Topic. Questions about wordpress.com belong on Web Applications. Questions about installing and maintaining WordPress belong on WordPress Development
    – DavidPostill
    Commented Nov 24, 2017 at 22:11
  • This is awkward. I'm reading this as a filesystem/permissions issue, possibly to do with the webserver. I'm reopening for now. Having a talk with @DavidPostill over how to proceed; if it's off topic here, we'll move it to the appropriate site.
    – Journeyman Geek
    Commented Nov 25, 2017 at 0:24
  • @JakeGould - Nginx worker threads are running under the http:http user/group as stated in the original post. The issue is making files newly created by WordPress (actually PHP) have http group ownership so Nginx can read them.
    – user826076
    Commented Nov 25, 2017 at 15:08
  • @DavidPostill - Apologies David. I don't actually think this is a WordPress-specific issue, but more to do with PHP and how to set up your LEMP stack in general, including Linux file permissions.
    – user826076
    Commented Nov 25, 2017 at 15:09

3 Answers


I believe I have the same setup for shared webhosting:

  • the folder /var/www contains subfolders with projects
  • each project folder contains various folders; one of them holds the content published online (that is, accessible by Apache and routed under the project's domain(s)): the /var/www/<project>/htdocs folder
  • each PHP site runs as PHP-FPM under its own user ID and group ID (user ID = group ID, a number between 10000 and 65535); those IDs don't exist in /etc/passwd, as the kernel does not care and it simplifies the setup
  • the PHP process runs with umask 0027
  • the htdocs folder is set up with permissions 02750, owned by the PHP user and the Apache group; all folders inside htdocs are 02750 and all files are 0640 (that is, 02750 without the executable and setgid bits)
  • apart from the PHP process, the htdocs folder can be written to only by the SFTP process, which runs under the same user and permissions as the PHP process

Unless someone manually changes the permissions, this ensures they are set correctly, whether files are created from PHP or uploaded through SFTP.
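As an illustration, this is roughly how the htdocs permissions could be set up, together with one way to give the PHP-FPM service the 0027 umask (a systemd drop-in; the user, group, path and service names below are placeholders rather than the numeric IDs from my setup):

project=myproject                      # placeholder project name
chown -R php-user:www-data "/var/www/$project/htdocs"
find "/var/www/$project/htdocs" -type d -exec chmod 2750 {} +
find "/var/www/$project/htdocs" -type f -exec chmod 0640 {} +

# /etc/systemd/system/php-fpm.service.d/umask.conf -- the umask is inherited by the pool workers
# [Service]
# UMask=0027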

To make this work for web file uploads as well, my solution is this:

  • there's another folder, /var/www/<project>/tmp, in the project folder to hold the site's temporary files; it is owned by the PHP user and PHP group, as Apache should never have access in there
  • inside this folder I have created /var/www/<project>/tmp/upload, owned by the PHP user and the Apache group with 02750 permissions; the point is not to give Apache access, but to ensure the correct group on uploaded files; since it does give Apache access anyway, I decided to create this separate folder rather than use /var/www/<project>/tmp itself for uploads; this does not decrease security, because after calling move_uploaded_file() the uploaded file is group-owned by Apache anyway
  • PHP is configured accordingly to use /var/www/<project>/tmp/upload for uploads and /var/www/<project>/tmp for all other temporary files (a sketch follows below)
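A sketch of how those directories and the corresponding PHP settings might look (user/group names, paths and the pool file location are placeholders):

project=myproject
mkdir -p "/var/www/$project/tmp/upload"
chown php-user:php-user "/var/www/$project/tmp"        # private: Apache has no business here
chown php-user:www-data "/var/www/$project/tmp/upload"
chmod 0750 "/var/www/$project/tmp"
chmod 2750 "/var/www/$project/tmp/upload"              # setgid: uploads inherit the Apache group

# in the site's php-fpm pool file, e.g. /etc/php/php-fpm.d/myproject.conf:
# php_admin_value[upload_tmp_dir] = /var/www/myproject/tmp/upload
# php_admin_value[sys_temp_dir]   = /var/www/myproject/tmp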

When a file is uploaded, these are the permissions of the temporary uploaded file and of the target file (created with move_uploaded_file()):

Filename: /var/www/<project>/tmp/upload/phphEfMeB
User: 10000
Group: 33
Permissions: 0100600
Umask: 0027

Target file: 
Filename: test/test-upload-1666022534
User: 10000
Group: 33
Permissions: 0100640
Umask: 0027

As you can see, PHP first creates the temporary file with 0600 permissions but with the correct (Apache's) group; move_uploaded_file() then keeps the group and changes the permissions to 0640.

I am using this approach as neither of the two solutions above fits my needs, because:

  1. As described above, I too want to avoid a PHP process having access to all the files on the shared webhosting
  2. Adding Apache to all the PHP groups would require me to "publish" them in /etc/group, not to mention that it would be a long list of groups for the Apache user. More importantly though, it would mean that Apache has access to everything in the PHP group, which also decreases security. There are files created by PHP (e.g. in the "home" folder or "tmp" files) that do not need to be accessible by Apache. However, with this approach they will be (unless the app removes the group permissions from the files).

Now, the above setup of mine still requires the aforementioned settings in the wp-config.php file:

define('FS_CHMOD_DIR', 02750 & ~umask());
define('FS_CHMOD_FILE', 0640 & ~umask());

The defaults in Wordpress are otherwise set like this:

# ABSPATH is the root of the Wordpress installation
define( 'FS_CHMOD_DIR', ( fileperms( ABSPATH ) & 0777 | 0755 ) );                                    
define( 'FS_CHMOD_FILE', ( fileperms( ABSPATH . 'index.php' ) & 0777 | 0644 ) );

This pretty much means "minimum 0755, maximum 0777" permissions, starting from whatever is set on the WordPress root folder; the result is that the setgid bit is removed from new folders and everything becomes world-readable.
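As a quick check of what those defaults evaluate to with the layout described above (WordPress root at 02750, index.php at 0640):

php -r 'printf("%o %o\n", 02750 & 0777 | 0755, 0640 & 0777 | 0644);'   # prints: 755 644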

Hence, without explicitly setting the values for the constants, this is what happens in Wordpress:

  • the first time WordPress creates a folder, it changes the permissions, removing the setgid bit and making the folder world-readable
  • the next time it runs and writes into the affected folders, they no longer have the setgid bit set, so the files created inside do not get the Apache group and are readable by Apache only because they are world-readable

If anyone's interested, here are a few Docker images using the aforementioned approach:

https://gitlab.com/craynic.com/craynic.net/mvh/

It's designed to work with MongoDB as the source of information, supports XFS quotas and runs on Kubernetes, which is why it's split into a few containers.

The initializer container prepares the directory structure (incl. XFS quotas), as well as Apache and PHP configuration files during "init" of the K8s pod, and later watches for changes in MongoDB to update the configuration. All other containers reload on configuration change.

  • BTW, despite all that, some plugins and some parts of WordPress still change permissions to - in my opinion - an "insane default", that is, world-readable. I will try to find out more and comment here. Commented Oct 17, 2022 at 20:18

I've found a couple of solutions to this:

  1. The ownership of new files created by WordPress (actually PHP) is dictated by the user and group directives in the php-fpm pool configuration file under which PHP executes. So to force newly created files to site-user:http ownership, as I was originally hoping, you could just set user = site-user and group = http in your site-specific php-fpm pool configuration file (a pool sketch follows after this list). This works in that newly created files have the group ownership I was looking for. However, from a security point of view this partly defeats the purpose of creating separate php-fpm pools for each site, as any site user could then write PHP code that, running with the http group, has read access to other sites' files/directories.

  2. Simpler, and more secure than #1 above, is to have all of a site's files/directories owned by site-user:site-user (instead of site-user:http as I had originally planned), and then add the http user to the site-user group. With 640 (files) and 750 (directories) permissions on all sites, this effectively gives Nginx read-only access to every site's files as required, while still preventing any site user from reading other sites' files.
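For option #1, the relevant part of a site's php-fpm pool file would look roughly like this (the file path, socket and process-manager settings are placeholders):

; /etc/php/php-fpm.d/site.conf  (path depends on the distribution)
[site]
user = site-user
group = http
listen = /run/php-fpm/site.sock
pm = ondemand
pm.max_children = 5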

Option #2 doesn't require any use of the setgid bit, which simplifies things, but it does require adding the following to your wp-config.php file:

  • define( 'FS_CHMOD_DIR', ( 0750 & ~ umask() ) );
  • define( 'FS_CHMOD_FILE', ( 0640 & ~ umask() ) );

Without the above lines WordPress/PHP will create world-readable files (644) and folders (755).
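For completeness, a sketch of the filesystem side of option #2, using the names from the question (example.com standing in for the actual domain):

# give nginx's user membership in the site's group (nginx must be restarted to pick it up)
usermod -a -G site-user http

# own everything as site-user:site-user with group read-only access
chown -R site-user:site-user /srv/http/example.com/wordpress
find /srv/http/example.com/wordpress -type d -exec chmod 0750 {} +
find /srv/http/example.com/wordpress -type f -exec chmod 0640 {} +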


I continued my research on how best to solve the permissions problem and believe I have found a better way than the ones I suggested above.

I am adding this as a second answer so that the details of my previous investigation remain available for reading.

Definitions common to all the solutions below

First, let me start with a few common definitions:

  • webserver - Apache or nginx; unless specified otherwise in parts of it, this answer applies equally to both LAMP and LEMP stacks
  • site - a project running/accessible under its own user; in Apache represented as one virtual host; one webserver runs multiple sites
  • site user - user having access to the site; both PHP and SFTP processes are running as a site user
  • site-user, site-group - the name of the system user and group of the site user
  • ws-group - group the webserver runs under
  • public files - files intended to be accessible by the webserver, potentially published to the internet
  • private files - files that must not be published to the internet and should not be accessible by the webserver

Problem statement, refined

How to set up permissions when the webserver is used to host multiple sites, so that:

  • each site runs as a separate user using PHP-FPM
  • each site user can see only his own files
  • site files are accessible through SFTP
  • the webserver can see all files
  • all that works with file uploads
  • all that works with WordPress - however, this problem is not WordPress-specific, as mentioned in one of the comments above, but concerns any application that manipulates filesystem permissions

For my use, I have extended the problem statement with these nice-to-haves:

  • Each site can have public and private files; the webserver can see only public files (that is, not everything). PHP and SFTP should have full access to both public and private files.
  • There's a directory for private files in place; files are made private merely by being placed in this directory. This is to minimise the requirements on the webmasters and their applications regarding how to make files private. Specifically, the need to call chmod() with a concrete permissions constant to make files private should be avoided.
  • Usage of chmod() and other permission-related operations should not be limited in PHP, e.g. by replacing them with stub functions.

Directory structure of a site

The directory structure for an example site used throughout this answer is as follows:

drwxr-x---+  root      site-group   /             # site root
drwxr-x---   site-user site-group   /home         # private files
drwxr-x---+  site-user site-group   /htdocs       # public files
drwxr-x---   root      site-group   /logs         # logs root
drwxr-S---   root      site-group   /logs/ws      # webserver logs
drwxr-x---   site-user site-group   /logs/php     # PHP logs
drwxr-x---   root      site-group   /tmp          # tmp files root
drwxr-x---   site-user site-group   /tmp/php      # PHP tmp files
drwxr-x---+  site-user site-group   /tmp/upload   # uploaded files

Notes:

  • root-owned directories (site root, logs root, tmp files root) are there to make sure that nobody can create or remove subdirectories or mess with their permissions; specifically, this makes sure that the site user can't, for example, delete the /htdocs directory or give access to all users
  • the + symbol indicates filesystem ACLs; the concrete ACLs used are discussed in the "ACL" solution below and are not relevant to other solutions
  • the /logs/ws directory has permissions designed specifically for Apache; Apache opens its logs on startup before dropping privileges, therefore the directory can be root-owned; the setgid bit is set there to make sure the site-group sticks, so that the site user has read (and not write!) access to the logs
  • Both PHP and SFTP processes run with umask 0027

Such a structure is different from the one in the original question, but it's equivalent with regard to the solutions described below - they apply equally to the original question.
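For illustration, the skeleton above could be created with something like this (/var/www/site stands in for the real project path; the "+" ACL entries are added later, in the ACL solution):

site=/var/www/site                      # placeholder
mkdir -p "$site"/{home,htdocs,logs/ws,logs/php,tmp/php,tmp/upload}
chown root:site-group      "$site" "$site/logs" "$site/tmp" "$site/logs/ws"
chown site-user:site-group "$site/home" "$site/htdocs" "$site/logs/php" "$site/tmp/php" "$site/tmp/upload"
chmod 0750 "$site" "$site"/{home,htdocs,logs,logs/php,tmp,tmp/php,tmp/upload}
chmod 2740 "$site/logs/ws"              # setgid + group read-only, i.e. drwxr-S---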

The solutions

The setgid approach

As the author of the question found out, one way to address this is to have the whole site owned by the site user, with the /htdocs directory group-owned by the ws-group.

This way, the site user (PHP, SFTP) has access everywhere, and the webserver only to the /htdocs directory.

However, since it's not desirable for the site user to belong to the ws-group (as discussed below in the "shared group" solution), files newly created by the site user in the /htdocs directory would not be group-owned by the ws-group, that is, not accessible by the webserver.

The way to ensure the group ownership also for new files is to set the setgid bit on the whole /htdocs directory recursively:

chmod -R g+s -- ./htdocs

That comes with a few challenges:

  1. As asked in the original question, that does not work well with file uploads.
  2. It's very fragile: WordPress (or any other application) can easily destroy the setgid flag through a wrong configuration or by accident.
  3. The ws-group will be set on all uploaded files, even those not aimed at the public files directory. Ideally, the group should be applied only when moving a file to the /htdocs directory.

1. File uploads

When PHP receives a file upload, it puts the file into the temporary directory configured by the upload_tmp_dir directive. Since this directory is almost certainly outside of /htdocs, the setgid bit won't take effect: the file won't get the ws-group, and PHP's move_uploaded_file() function won't change that fact. TL;DR: the file won't be accessible by the webserver.

The solution to this problem is simple: create a dedicated "upload" directory (as suggested above in the example directory structure, pointed to by the aforementioned upload_tmp_dir directive), make it group-owned by the ws-group and set the setgid bit on it too. This is secure enough - the directory is used only for PHP uploads, other temporary files are placed elsewhere - and it makes uploaded files group-owned by the ws-group.
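Using the example structure above, the upload directory would be prepared like this, with PHP pointed at it (the pool file location is an assumption):

site=/var/www/site
chown site-user:ws-group "$site/tmp/upload"
chmod 2750 "$site/tmp/upload"          # setgid: uploaded temporary files get the ws-group

# in the site's php-fpm pool file:
# php_admin_value[upload_tmp_dir] = /var/www/site/tmp/upload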

2. Fragility

setgid is extremely fragile - any mkdir() or chmod() operation can easily destroy it, and WordPress is full of such calls.

Even if you change the behaviour of WordPress using the FS_CHMOD_DIR and FS_CHMOD_FILE configuration constants, some plugins will still ignore this and change the mode to a "more secure" 0400 or so - effectively turning off setgid.

While I believe WordPress plugins should NOT make assumptions about how to secure files on the server, it is what it is, and this approach can't be used in a stable way.

I've also witnessed similar behaviour with some other frameworks.

Shared group for all PHP processes

Another solution suggested above is to put all PHP processes into the ws-group.

This will work, but as the author adds, it's a security issue - all PHP processes would be able to read data across all sites, and there's not much that can be done to stop this: open_basedir is not bullet-proof and chroot is tricky to set up.

As such, this solution is not acceptable due to the security concerns.

Adding the webserver to all site groups

The other way round could be to add the webserver to all site groups. This way, PHP processes would see only their own files, but the webserver could see everything.

This does not match my extended requirements, as all of PHP's files, including private files, would be visible to the webserver unless their access mode were restricted to owner only. It's not possible to rely on webmasters being disciplined enough and/or understanding in depth how Linux filesystem permissions work.

A bigger problem with this solution, however, was that I couldn't make Apache a member of more than 30 groups. At least on the Alpine distribution, once Apache had been added to more than 30 groups, the following error appeared:

initgroups: unable to set groups for User apache...

As a result, Apache was not added to any groups.

Even if I were able to solve that, another problem is that the webserver would have to monitor sites for changes and dynamically add/remove itself to/from site groups. At least with Apache, that also requires a restart (not just a reload).

Too clumsy, so I decided to look further.

Using mpm-itk Apache's module

One interesting Apache-only solution popped up: the mpm-itk module.

This module allows each Apache virtual host to run as a separate user.

More about it can be found on the module homepage and e.g. also in this StackOverflow answer.

The downside of this solution is that it hasn't been packaged for the Alpine distribution since Apache version 2.4, so I would have to either switch to e.g. Ubuntu (where it's still available) or compile it myself.

Another downside is that it gives Apache access to everything that is accessible by the PHP group. Similarly to the previous solution, webmasters would have to make sure that private files are not group-accessible.

Using filesystem ACLs

Finally, I came across a solution with filesystem ACLs.

Essentially, it's very similar to the setgid approach, but it allows for more fine-grained access control: selected system groups can be given access to directories/files even if they don't own or group-own them.

My solution is to use it as follows:

setfacl       -m "g:ws-group:x"  -- "./"
setfacl -d -R -m "g:ws-group:rX" -- "./htdocs"
setfacl    -R -m "g:ws-group:rX" -- "./htdocs"

The first command gives the webserver "search" (execute) permission on the site root. Without it, the webserver wouldn't be able to access anything underneath.

The second command sets the default ACLs (applied to all newly created directories and files) on the /htdocs directory recursively, and the third one adds the ACLs to the already existing contents of the directory.
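The result can be checked with getfacl; the output below is illustrative and abridged:

getfacl --omit-header ./htdocs
# user::rwx
# group::r-x
# group:ws-group:r-x
# mask::r-x
# other::---
# default:group:ws-group:r-x
# ...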

Similarly to the setgid approach, this also needs to be set on the temporary directory for uploaded files:

setfacl -d -m "g:ws-group:r" -- "./tmp/upload"

While it's effectively the same as the setgid solution, the advantage of this approach is that the ACLs are not as fragile as setgid and chmod() or mkdir() operations do not affect them.

Server administrators can easily control which directories are by default public and which are private.

To keep some files unpublished, webmasters can choose either to use the private files directory and skip the trouble of understanding the Linux filesystem permissions model, or to control permissions manually by removing group access from selected files. As opposed to setgid, if you remove group access from a directory/file, the ACLs stay; by granting group access back, the ACLs become effective again.

This approach fits all my needs and is my solution of choice.

Closing thoughts

Filesystem permissions seem to be the current interface through which an application controls access to its files.

I'm deeply convinced that's wrong, as it leads to application authors making various assumptions about the hosting, such as:

  • "If I remove the access to a file from group and others, it won't be accessible publicly."
  • "In order to make a file publicly accessible, I need to give it access mode of 0666 (or 0444)."

Both of these assumptions might be wrong. But even if they were true, the existence of this Super User question shows how things can easily go wrong:

  • The application creates a directory and makes it private by assigning access mode 0700.
  • The application creates files in the new directory.
  • Later on, the application decides to make the access public by granting group access (chmod -R g+rX dir).
  • Because the earlier chmod removed the setgid bit (without the application knowing it was needed), the files are not in the correct group and are thus still not accessible.
  • The application then decides to make the files readable by all users (chmod -R go+rX dir).
  • Now, sensitive data might be exposed to other users of the system.

Of course, one solution is for the hosting provider to declare guarantees and requirements and for applications to follow them. While I essentially agree with that, I am suggesting that the current guarantees and requirements are too vague or too complex to be relied upon securely.

In my opinion, hosted applications should not mess with the underlying filesystem permissions at all. That's the job of the infrastructure. That's why I suggest hostings offer "public" and "private" directories, making it the job of the underlying infrastructure to grant or revoke public access to them.

That certainly also requires webmasters to have some knowledge of the infrastructure, but it's a very simple concept that can easily be agreed upon. It's also much easier to implement; easier, I would argue, than the semi-complex logic around filesystem permissions that can be seen in the WordPress example.

I am aware that this does not apply well to bigger applications, where the infrastructure will be tuned to their needs. However, there are still plenty of small applications around the world that should adhere to the requirements of mass virtual hosting, as it does not pay off for hosting providers to run each site in a separate container.

  • I don't know if you are still active on the platform, but man, being just a web dev trying to learn basic server setup, the permissions were the pain at the end. After 5 years your answer actually explained what is actually going on. The upload_tmp_dir path makes total sense now. I was losing my mind on this. ACL makes for the best solution; I've been trying it, and it messes up here and there as I still haven't fully grasped this. If you ask any AI about the issue it just spits nonsense. Thank you, after 5 years you saved my sanity. Commented Feb 25 at 13:37
  • 1
  • The mv command, and even the language API (i.e. PHP's move_uploaded_file()), retains permissions; cp, on the other hand, doesn't. unix.stackexchange.com/a/149798 Commented Feb 25 at 13:46
