
As discussed in Understanding UNIX permissions and file types, each file has permission settings ("file mode") for:

  • the owner / user ("u"),
  • the owner's group ("g"), and
  • everyone else ("o").

As far as I understand, the owner of a file can always change the file's permissions using chmod. So can any application running as the owner.

What is the reason for restricting the owner's own permissions if they can always change them?

The only use I can see is protection against accidental deletion or execution, which can easily be overcome when intended.


A related question has been asked here: Is there a reason why 'owner' permissions exist? Aren't group permissions enough? It discusses why the owner's permissions cannot be replaced by a dummy group consisting of a single user (the owner). In contrast, here I am asking about the purpose of having permissions for the owner in principle, regardless of whether they are implemented through a separate "u" octal digit or a separate group + ACLs.


4 Answers


There are various reasons to reduce the owner's permissions (though rarely to less than that of the group).

  • The most common is removing execute permission from files not intended to be executed. Quite often, shell scripts are fragments intended to be sourced from other scripts (e.g. your .profile) and don't make sense as top-level processes. Command completion will only offer executable files, so correct permissions help in interactive shells.

  • Accidentally overwriting a file is a substantial risk: it can happen through mistyping a command, or even more easily in GUI programs. One of the first things I do when copying files from my camera is to make them (and their containing directory) non-writable, so that any edits I make must be copies rather than overwrites of the originals.

  • Sometimes it's important that files are not even readable. If I upgrade my Emacs and have problems with local packages in my ~/lisp directory, I selectively disable them (with chmod -r) until it can start up successfully; then I can make them readable one at a time as I fix compatibility problems.
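The three cases above can be sketched with plain chmod. This is only an illustration; all the file names and the /tmp paths are made up, and umask 022 is assumed so that new files start at mode 644:

```shell
# Sketch of the three cases above; all names are illustrative.
umask 022                      # newly created files start at mode 644
mkdir -p /tmp/mode-demo && cd /tmp/mode-demo

printf '# fragment meant to be sourced, not run\n' > env.sh
chmod 755 env.sh               # suppose it was wrongly marked executable
chmod a-x env.sh               # completion won't offer it any more (644)

printf 'raw photo data\n' > img0001.raw
chmod a-w img0001.raw          # edits must be copies, not overwrites (444)

printf '(broken-elisp)\n' > flaky.el
chmod a-r flaky.el             # temporarily invisible to readers (200)
```

Each chmod here records an intention about the file, which is exactly the point made below.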

A correct set of permissions for the user indicates intentionality. Although the user can change permissions, well-behaved programs won't do that (at least, not without asking first). Instead of thinking of the permissions as restricting the user, think of them as restricting what the user's processes can do at a given point in time.

  • Instead of thinking about the user themself doing anything, one should understand that substantially all user interactions with the system are mediated and embodied by the user's processes or processes operating on the user's behalf, including, but not limited to, text-mode and / or graphical shells. For most intents and purposes, it doesn't make sense to try to distinguish between a user and their processes. Commented Apr 18, 2022 at 19:37
  • That's true, @John. But it seems helpful to consider that the user (a person) might not fully understand the workings of each process that they run, and therefore might want to protect against unintended consequences thereof. There's a bunch of other good answers, so I don't see a great need to update this answer too much. Commented Apr 19, 2022 at 8:25
  • @JohnBollinger the distinction is important in the context of security: a process such as a web browser is more than just a representation of what the user wants, e.g. in the case of a vulnerability being exploited it might execute code injected by another person.
    – Ruslan
    Commented Apr 21, 2022 at 9:17
  • For a specific example, I was once compiling a set of photographs to exhibit. I started with a directory full of symlinks and used a GUI image viewer to delete the ones I didn't want. Unknown to me, the GUI tool was deleting the target of each symlink, rather than the link itself. I ended up having to restore these files from the backups. As a user, I have greater confidence in simple tools such as chmod to do what I expect. Commented Apr 21, 2022 at 10:16
  • My remarks seem to have been taken in a different light than I intended. My point is not to object to this answer's explanation. In fact, I quite like its characterization of user permissions in terms of intentionality. My point is that partitioning actions between what users do themselves and what their processes do on their behalf is artificial, misleading, and not particularly useful. Indeed, that misunderstanding bears directly on the question. Commented Apr 21, 2022 at 13:40

You seem to be missing a rather important point here: well-behaved processes don’t go around modifying the permissions of files they have access to. ls won’t randomly make a directory you point it at readable just so it can list the directory contents. ssh won’t ‘fix’ the permissions on ~/.ssh or the files it contains if they are wrong; it will just refuse to run. And it’s generally safe to assume that any program you are likely to use is well behaved in this manner.

This means that the owner permissions set on a given file are usually honored (unless you’re the root user or can otherwise short-circuit the DAC checks, such as by having CAP_DAC_OVERRIDE on Linux; sane programs just trust the kernel to check permissions), and therefore it is generally useful to protect a given file against accidental modification or execution. And while this can be relatively easily overcome, the user has to explicitly do something to overcome it. In other words, it functions as yet another confirmation step indicating ‘Yes, I really do want to do this.’
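That confirmation step can be sketched like this (file names and paths are invented; the refused write only fails for a non-root user, since root bypasses DAC):

```shell
# Sketch: a read-only file as an extra "are you sure?" step.
umask 022
mkdir -p /tmp/confirm-demo && cd /tmp/confirm-demo
printf 'original\n' > notes.txt
chmod a-w notes.txt

# An ordinary (non-root) process is refused the write...
sh -c 'printf "oops\n" > notes.txt' 2>/dev/null || echo 'write refused'

# ...until the user explicitly flips the bit back:
chmod u+w notes.txt
printf 'edited\n' > notes.txt
```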

More generically though, because owner permissions are usually honored, there are numerous useful things you can do with them:

  • Make files or directories read-only (equivalent to setting the ‘Read-Only’ attribute on Windows).
  • Mark files as not being executable (or XDG .desktop files as untrusted).
  • Functionally ‘hide’ the contents of directories (by marking the directory as not readable) or files. This is actually very useful when debugging issues with plugins for some applications, because most applications that use per-plugin directories or files act as if the plugin is not there if its files are not readable.
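A minimal sketch of that plugin-debugging trick (the directory layout and plugin name are hypothetical; a host application that reads per-plugin files will typically act as if an unreadable one is absent):

```shell
# Sketch: hide a misbehaving plugin from its host application.
umask 022
mkdir -p /tmp/plugin-demo/plugins
printf 'plugin code\n' > /tmp/plugin-demo/plugins/flaky.so
chmod a-r /tmp/plugin-demo/plugins/flaky.so   # host now treats it as absent
# ...debug, fix, then restore visibility with: chmod a+r flaky.so
```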

Most of the time I do this to protect against accidental deletion/modification, as you suggested. Sometimes, however, I do it so that I can perform batch modifications on all the files/directories in a certain tree except the ones I've "protected".
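One way to sketch that batch-editing pattern (the file names and the stat-based write-bit guard are just one illustrative approach, assuming GNU stat):

```shell
# Sketch: batch-edit every file in a tree except the "protected" ones.
umask 022
mkdir -p /tmp/batch-demo && cd /tmp/batch-demo
printf 'v1\n' > a.txt
printf 'v1\n' > b.txt
printf 'keep\n' > protected.txt
chmod a-w protected.txt            # shield it from the batch pass

for f in *.txt; do
    case "$(stat -c %A "$f")" in
        -?w*) printf 'v2\n' > "$f" ;;   # owner-writable: rewrite it
        *)    : ;;                      # write bit cleared: skip it
    esac
done
```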

  • As a Windows user I routinely do this by setting the read-only attribute (attrib +r myfile). Google tells me that file attributes exist in unix, and while there's no exact equivalent you could use chattr +i or chattr +u to achieve the same result. On unix, why would you use chmod instead of chattr? (If this isn't a one-liner I could post this as a new question.) Commented Apr 18, 2022 at 9:40
  • @JohnRennie File attributes are extra flags outside of the normal UNIX DAC permissions model, are not always available, do not always actually do anything (chattr +u for example is not actually supported on most Linux filesystems), may not behave as expected (for example, on some filesystems certain attributes only do anything if changed while there is no data in the file) and often can only be changed by the root user. Commented Apr 18, 2022 at 11:32
  • @AustinHemmelgarn Thanks :-) Commented Apr 18, 2022 at 11:52

Restricted owner permissions are useful in restricted environments, where the user doesn't have access to tools that change permissions.

The classic example is anonymous FTP servers. You can create a "dropbox" directory where the owner has write permissions but not read permissions. This allows anonymous users to upload files, but not list the files that other users have uploaded. Meanwhile, the directory would be readable by its group, so we would put users who are allowed to retrieve from the directory in that group. If the FTP server doesn't provide a chmod command, the anonymous users can't override this and give themselves permission to list the directory.
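The dropbox layout can be sketched like this (the paths are illustrative; on a real server the directory would also be chowned to the FTP account and the reviewers' group, which needs root):

```shell
# Sketch: a write-only upload directory.
mkdir -p /tmp/ftp-demo/incoming
chmod 370 /tmp/ftp-demo/incoming
# owner: -wx  (can create files, cannot list the directory)
# group: rwx  (reviewers can list and fetch uploads)
# other: ---
# On a real server: chown ftp:reviewers /tmp/ftp-demo/incoming
```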

  • If the idea is to prevent files from being passed on before a trusted user has checked them, that non-readable directory isn't enough, but they could make the FTP server create the files with 0000 permissions. Though I guess nowadays we'd probably just have access checks within the server itself, instead of being tied to the limits of kernel permissions. (And the server could then just deny requests to read from the upload directory even if it was itself allowed to do so by the OS.)
    – ilkkachu
    Commented Apr 18, 2022 at 13:50
  • True, this was how it was done originally, before FTP servers with lots of configuration options were written. It just leveraged the existing permission features.
    – Barmar
    Commented Apr 18, 2022 at 13:59
If the files have 0000 permissions, how will the readers read them? We'd need to give them a set-uid tool to retrieve them.
    – Barmar
    Commented Apr 18, 2022 at 14:01
  • Indeed, this wasn't a complete solution. It was common for software/media pirates to use anonymous FTP servers as distribution points by using well known filenames.
    – Barmar
    Commented Apr 18, 2022 at 14:26
  • Anyway, the question seems to be phrased so as to second-guess the current design (such as it is), so that also triggered my playing devil's advocate here: Restricting the user's permissions on an FTP server still doesn't make it necessary to be able to do that for all files at the kernel level, as the FTP server could just apply the relevant restrictions itself... :) It's a(n imperfect) use-case for the flexibility the existing system provides, but not one that makes the existing system the only possible one.
    – ilkkachu
    Commented Apr 18, 2022 at 14:34
