232

Is it possible to tune a kernel parameter to allow a userland program to bind to port 80 and 443?

The reason I ask is that I think it's foolish to require a privileged process to open a socket and listen. Anything that opens a socket and listens is high risk, and high-risk applications should not be running as root.

I'd much rather figure out which unprivileged process is listening on port 80 than try to remove malware that burrowed in with root privileges.

4
  • 20
    The long answer, though, is yes... so the short answer kind of should be yes too.
    – B T
    Commented Aug 27, 2014 at 21:42
  • 6
    The short answer is yes.
    – Jason C
    Commented Mar 21, 2015 at 21:17
  • 1
    Use noob's answer that uses iptables to redirect port traffic. Simplest solution by far, and easy to undo if necessary. Commented Mar 19, 2020 at 2:42
  • Another possibility: How do I use capsh? Note you need CAP_NET_BIND_SERVICE. Commented May 16, 2022 at 19:03

7 Answers

314

I'm not sure what the other answers and comments here are referring to. This is possible rather easily. There are two options, both of which allow access to low-numbered ports without having to elevate the process to root:

Option 1: Use CAP_NET_BIND_SERVICE to grant low-numbered port access to a process:

With this, you can use the setcap command to permanently grant a specific binary the ability to bind to low-numbered ports:

sudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/binary

For more details on the e/i/p part, see cap_from_text.

After doing this, /path/to/binary will be able to bind to low-numbered ports. Note that you must use setcap on the binary itself rather than a symlink.
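For example, a minimal sketch with a hypothetical binary at /usr/local/bin/myserver (the path is just a placeholder), including how to check and later remove the capability:

# grant the capability (binary path is a placeholder)
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/myserver

# verify: getcap prints the file capabilities attached to the binary
getcap /usr/local/bin/myserver

# remove the capability again if it is no longer needed
sudo setcap -r /usr/local/bin/myserver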

Option 2: Use authbind to grant one-time access, with finer user/group/port control:

The authbind (man page) tool exists precisely for this.

  1. Install authbind using your favorite package manager.

  2. Configure it to grant access to the relevant ports, e.g. to allow 80 and 443 from all users and groups:

    sudo touch /etc/authbind/byport/80
    sudo touch /etc/authbind/byport/443
    sudo chmod 777 /etc/authbind/byport/80
    sudo chmod 777 /etc/authbind/byport/443
    
  3. Now execute your command via authbind (optionally specifying --deep or other arguments, see the man page):

    authbind --deep /path/to/binary command line args
    

    E.g.

    authbind --deep java -jar SomeServer.jar
    

There are upsides and downsides to both of the above. Option 1 grants trust to the binary but provides no control over per-port access. Option 2 grants trust to the user/group and provides control over per-port access, but older versions supported only IPv4 (since I originally wrote this, newer versions with IPv6 support have been released).
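If you would rather not open a port to every account (see the comments below), a more restrictive variant is to give the byport file to a single account and make it executable by that account only; a sketch, with "appuser" as a placeholder user name:

sudo chown appuser /etc/authbind/byport/80
sudo chmod 500 /etc/authbind/byport/80

authbind authorizes a port when the calling user can execute the corresponding byport file, so this limits port 80 to appuser alone.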

17
  • 3
    Does it really need rwx permission?
    – matanox
    Commented Apr 23, 2016 at 17:42
  • 18
    Beware that, with setcap, if you overwrite the executable you grant privileges to (ex: do a rebuild) then it loses its privileged port status and you have to give it privileges again :|
    – rogerdpack
    Commented Oct 27, 2016 at 22:25
  • 3
    Something that I had to fiddle with: I was trying to run a sysv service that runs a ruby executable. You need to give the setcap permission on the version-specific ruby executable, e.g. /usr/bin/ruby1.9.1. Commented Jan 25, 2017 at 19:31
  • 10
    I have my doubts that chmodding the byport files to 777 is the best idea. I've seen permissions ranging from 500 to 744. I would stick to the most restrictive one that works for you.
    – Pere
    Commented May 9, 2017 at 9:09
  • 5
    IMO you really shouldn't be giving access to "all users and groups". Instead, you should pick a trusted user that needs to run this, then chown the /etc/authbind/byport/80 and 443 files to that user and chmod them so that they are executable by that user and no one else. Otherwise you're increasing your security risk, not decreasing it.
    – deltaray
    Commented Feb 18, 2021 at 14:39
58

I have a rather different approach. I wanted to use port 80 for a node.js server. I was unable to do it since Node.js was installed for a non-sudo user. I tried to use symlinks, but it didn't work for me.

Then I got to know that I can forward connections from one port to another port. So I started the server on port 3000 and set up a port forward from port 80 to port 3000.

Here are the commands that can be used to do this:

localhost/loopback

sudo iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 3000

external

sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000

I used the second command and it worked for me. So I think this is a middle ground: user processes are not allowed to bind the low ports directly, but they can still be reached there via port forwarding.
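If you later need to inspect or undo the redirect (as asked in the comments below), the NAT rules can be listed with line numbers and deleted by number; a sketch:

# list the rules in the nat table with their line numbers
sudo iptables -t nat -L PREROUTING --line-numbers -n
# delete the redirect by its line number (1 here is just an example)
sudo iptables -t nat -D PREROUTING 1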

6
  • nginx is a great option along these lines, too; easy to set up and very powerful.
    – Jason C
    Commented May 10, 2021 at 21:58
  • @JasonC I agree! It's more declarative and better supported and AFAIK nginx uses port-forward too.
    – noob
    Commented May 11, 2021 at 8:14
  • Quick question: I ran the second command a while back, and I'm looking to remove this, as I have written an automatic forwarding server/loadbalancer, and would like to deploy it under port 80.
    – J-Cake
    Commented May 25, 2021 at 9:48
  • 2
    Keep in mind you may need to bind to 0.0.0.0 instead of 127.0.0.1 for external traffic.
    – Soheil
    Commented Sep 21, 2021 at 2:52
  • As soon as I start the firewalld service on my machine, the port forwarding stops working. Any suggestions on what might be happening?
    – Jay Joshi
    Commented Oct 21, 2022 at 0:20
42

Simplest solution: remove all privileged ports on Linux

Works on Ubuntu/Debian:

# save the configuration permanently
echo 'net.ipv4.ip_unprivileged_port_start=0' | sudo tee /etc/sysctl.d/50-unprivileged-ports.conf
# apply the configuration
sudo sysctl --system

(works well for VirtualBox with a non-root account)

Now, be careful about security, because any user can bind to any port!
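If opening every port is too broad, the same sysctl can be set to 80 instead of 0; it still cannot single out just 80 and 443 (see the comments), but it limits the change to ports 80 and above:

# temporary, until reboot: ports 80-1023 become bindable without root (higher ports already are)
sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80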

6
  • 2
    That's clever. One small nit: the configuration opens 80 and 443, but it also opens all the other ports. Relaxing permissions on the other ports may not be desired.
    – jww
    Commented Sep 13, 2019 at 13:36
  • Nice solution. I have used it for IPv6 and it is working perfectly. Here is what I've done: docs.google.com/document/d/e/…
    – Fernando
    Commented Apr 23, 2020 at 11:32
  • seems to be the simplest solution; however, is there a way to open only 80 and 443 to a certain group?
    – mekb
    Commented Aug 26, 2021 at 2:45
  • Allowing only 80 and 443 is not possible with this method. You can change the value to 80, but it will allow the port range 80-1023 for non-root users.
    – soleuu
    Commented Sep 16, 2021 at 13:39
  • 1
    > be careful about security because all users can bind all ports Can someone elaborate why this is bad? Commented Mar 20, 2022 at 16:01
41

Dale Hagglund is spot on. So I'm just going to say the same thing but in a different way, with some specifics and examples. ☺

The right thing to do in the Unix and Linux worlds is:

  • to have a small, simple, easily auditable, program that runs as the superuser and binds the listening socket;
  • to have another small, simple, easily auditable, program that drops privileges, spawned by the first program;
  • to have the meat of the service, in a separate third program, run under a non-superuser account and chain loaded by the second program, expecting to simply inherit an open file descriptor for the socket.

You have the wrong idea of where the high risk is. The high risk is in reading from the network and acting upon what is read not in the simple acts of opening a socket, binding it to a port, and calling listen(). It's the part of a service that does the actual communication that is the high risk. The parts that open, bind(), and listen(), and even (to an extent) the part that accepts(), are not the high risk and can be run under the aegis of the superuser. They don't use and act upon (with the exception of source IP addresses in the accept() case) data that are under the control of untrusted strangers over the network.

There are many ways of doing this.

inetd

As Dale Hagglund says, the old "network superserver" inetd does this. The account under which the service process is run is one of the columns in inetd.conf. It doesn't separate the listening part and the dropping privileges part into two separate programs, small and easily auditable, but it does separate off the main service code into a separate program, exec()ed in a service process that it spawns with an open file descriptor for the socket.

The difficulty of auditing isn't that much of a problem, as one only has to audit the one program. inetd's major problem is not so much auditing as the fact that it doesn't provide simple fine-grained runtime service control, compared to more recent tools.
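For illustration only (the server program and its path are hypothetical), an inetd.conf entry for an HTTP service could look like this; the user column determines the account the spawned service runs under, and the connected socket is handed to the program on its standard descriptors:

# service  type    proto  wait    user      program                   arguments
http       stream  tcp    nowait  www-data  /usr/local/sbin/in.httpd  in.httpd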

UCSPI-TCP and daemontools

Daniel J. Bernstein's UCSPI-TCP and daemontools packages were designed to do this in conjunction. One can alternatively use Bruce Guenter's largely equivalent daemontools-encore toolset.

The program to open the socket file descriptor and bind to the privileged local port is tcpserver, from UCSPI-TCP. It does both the listen() and the accept().

tcpserver then spawns either a service program that drops root privileges itself (because the protocol being served involves starting out as the superuser and then "logging on", as is the case with, for example, an FTP or an SSH daemon) or setuidgid which is a self-contained small and easily auditable program that solely drops privileges and then chain loads to the service program proper (no part of which thus ever runs with superuser privileges, as is the case with, say, qmail-smtpd).

A service run script would thus be for example (this one for dummyidentd for providing null IDENT service):

#!/bin/sh -e
exec 2>&1
exec \
tcpserver 0 113 \
setuidgid nobody \
dummyidentd.pl

nosh

My nosh package is designed to do this. It has a small setuidgid utility, just like the others. One slight difference is that it's usable with systemd-style "LISTEN_FDS" services as well as with UCSPI-TCP services, so the traditional tcpserver program is replaced by two separate programs: tcp-socket-listen and tcp-socket-accept.

Again, single-purpose utilities spawn and chain load one another. One interesting quirk of the design is that one can drop superuser privileges after listen() but before even accept(). Here's a run script for qmail-smtpd that indeed does exactly that:

#!/bin/nosh
fdmove -c 2 1
clearenv --keep-path --keep-locale
envdir env/
softlimit -m 70000000
tcp-socket-listen --combine4and6 --backlog 2 ::0 smtp
setuidgid qmaild
sh -c 'exec \
tcp-socket-accept -v -l "${LOCAL:-0}" -c "${MAXSMTPD:-1}" \
ucspi-socket-rules-check \
qmail-smtpd \
'

The programs that run under the aegis of the superuser are the small service-agnostic chain-loading tools fdmove, clearenv, envdir, softlimit, tcp-socket-listen, and setuidgid. By the point that sh is started, the socket is open and bound to the smtp port, and the process no longer has superuser privileges.

s6, s6-networking, and execline

Laurent Bercot's s6 and s6-networking packages were designed to do this in conjunction. The commands are structurally very similar to those of daemontools and UCSPI-TCP.

run scripts would be much the same, except for the substitution of s6-tcpserver for tcpserver and s6-setuidgid for setuidgid. However, one might also choose to make use of M. Bercot's execline toolset at the same time.

Here's an example of an FTP service, lightly modified from Wayne Marshall's original, that uses execline, s6, s6-networking, and the FTP server program from publicfile:

#!/command/execlineb -PW
multisubstitute {
    define CONLIMIT 41
    define FTP_ARCHIVE "/var/public/ftp"
}
fdmove -c 2 1
s6-envuidgid pubftp 
s6-softlimit -o25 -d250000 
s6-tcpserver -vDRH -l0 -b50 -c ${CONLIMIT} -B '220 Features: a p .' 0 21 
ftpd ${FTP_ARCHIVE}

ipsvd

Gerrit Pape's ipsvd is another toolset that runs along the same lines as ucspi-tcp and s6-networking. The tools are chpst and tcpsvd this time, but they do the same thing, and the high risk code that does the reading, processing, and writing of things sent over the network by untrusted clients is still in a separate program.

Here's M. Pape's example of running fnord in a run script:

#!/bin/sh
exec 2>&1
cd /public/10.0.5.4
exec \
chpst -m300000 -Uwwwuser \
tcpsvd -v 10.0.5.4 443 sslio -v -unobody -//etc/fnord/jail -C./cert.pem \
fnord

systemd

systemd, the new service supervision and init system that can be found in some Linux distributions, is intended to do what inetd can do. However, it doesn't use a suite of small self-contained programs. One has to audit systemd in its entirety, unfortunately.

With systemd one creates configuration files to define a socket that systemd listens on, and a service that systemd starts. The service "unit" file has settings that allow one a great deal of control over the service process, including what user it runs as.

With that user set to be a non-superuser, systemd does all of the work of opening the socket, binding it to a port, and calling listen() (and, if required, accept()) in process #1 as the superuser, and the service process that it spawns runs without superuser privileges.
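As a minimal sketch (the unit names, binary path, and www-data account are placeholders), the pair of configuration files could look like this; systemd itself binds port 80, and the service process never runs as the superuser:

# /etc/systemd/system/myhttpd.socket
[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# /etc/systemd/system/myhttpd.service
[Service]
User=www-data
ExecStart=/usr/local/bin/myhttpd
# The open listening socket is passed to the process via the LISTEN_FDS protocol.

Enabling the socket unit with systemctl enable --now myhttpd.socket makes systemd start listening immediately; the service itself is only started when the first connection arrives.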

3
  • 3
    Thanks for the compliment. This is a great collection of concrete advice. +1. Commented Apr 23, 2014 at 23:13
  • 1
    so much reading... I just want to serve static files
    – user383438
    Commented Sep 26, 2021 at 10:45
  • Maybe you could add the phrase socket activation? I think that is the common terminology for referring to this systemd feature. Commented Nov 21, 2023 at 13:58
6

Your instincts are entirely correct: it's a bad idea to have a large, complex program run as root, because its complexity makes it hard to trust.

But, it's also a bad idea to allow regular users to bind to privileged ports, because such ports usually represent important system services.

The standard approach to resolving this apparent contradiction is privilege separation. The basic idea is to separate your program into two (or more) parts, each of which does a well-defined piece of the overall application, and which communicate by simple limited interfaces.

In the example you give, you want to separate your program into two pieces: one that runs as root, opens and binds the privileged socket, and then hands it off somehow to the other part, which runs as a regular user.

There are two main ways to achieve this separation.

  1. A single program that starts as root. The very first thing it does is create the necessary socket, in as simple and limited a way as possible. Then, it drops privileges, that is, it converts itself into a regular user mode process, and does all other work. Dropping privileges correctly is tricky, so please take the time to study the right way to do it.

  2. A pair of programs that communicate over a socket pair created by a parent process. A non-privileged driver program receives initial arguments and perhaps does some basic argument validation. It creates a pair of connected sockets via socketpair(), then forks and execs two other programs that will do the real work and communicate via the socket pair. One of these is privileged and will create the server socket and perform any other privileged operations; the other will do the more complex and therefore less trustworthy application work.

5
  • 1
    What you're proposing isn't considered best practice. You might look at inetd, which can listen on a privileged socket and then hand that socket off to an unprivileged program. Commented Feb 2, 2014 at 9:59
  • Probably good advice if you are designing the program. If you just want to run a program that accepts a port as an argument, what would you do then?
    – jontejj
    Commented Jan 21, 2021 at 21:34
  • @jontejj Just to make sure I'm clear, you're talking about a program that accepts a port number to listen on via the command line? I'd start by seeing if there was any way to use a non-privileged port, to avoid needing root privs. There might be a way to use Linux capability tools to assign just the right to open privileged ports when you run the program. Commented Jan 22, 2021 at 3:55
  • authbind seems to be the way to go for one-offs?
    – jontejj
    Commented Jan 22, 2021 at 5:34
  • @jontejj I'm not familiar with it so I can't say. Commented Jan 22, 2021 at 7:27
4

If you are running Linux with systemd, then you can simply add this to the server's unit file:

# /etc/systemd/system/http_server.service
# ...
[Service]
# ...
AmbientCapabilities = CAP_NET_BIND_SERVICE

And, if, in addition, you want your web server to never gain additional capabilities, you may also add:

CapabilityBoundingSet = CAP_NET_BIND_SERVICE

Also see the systemd.exec(5) man page for a description of those systemd service unit file configuration options, which define the execution environment of spawned processes.
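To apply the change and confirm it took effect (using the http_server unit from the example above), something along these lines should work:

sudo systemctl daemon-reload
sudo systemctl restart http_server.service
# show the capability-related properties systemd has recorded for the unit
systemctl show -p AmbientCapabilities -p CapabilityBoundingSet http_server.service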

2

What is the simplest thing that could possibly work?

A reverse proxy. Nginx is simpler than iptables (for me, anyway). Nginx also offers "SSL termination".

sudo apt install nginx
sudo service nginx start
# Verify it's working
curl http://localhost
# make certs
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt
sudo nano /etc/nginx/conf.d/devserver.conf 

Add this content:

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {

    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate           /etc/nginx/cert.crt;
    ssl_certificate_key       /etc/nginx/cert.key;

    location / {

      proxy_set_header        Host $host;
      proxy_set_header        X-Real-IP $remote_addr;
      proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header        X-Forwarded-Proto $scheme;

      proxy_pass          http://localhost:8080;
    }
}

Then restart the server:

sudo service nginx restart

Configure DNS: A record for www.example.com -> 127.0.0.1

# Test it out:
curl --insecure --verbose https://www.example.com
