
I am trying to use a DigitalOcean VPS as an OpenVPN server to access services (e.g. Nextcloud) hosted on my home network through subdomains (e.g. nextcloud.example.com).

I have set up the following:

  • [working] kylemanna/openvpn Docker container on the DigitalOcean VPS (started roughly as sketched just after this list)
  • [working] Connected my home pfSense router as a VPN client to the DigitalOcean VPS
  • [working] Set up the Nextcloud service on my home network
  • [working] When connected to the VPN, I can ping between devices and also access the Nextcloud service through its internal IP
  • [Not working] jwilder/nginx-proxy to route nextcloud.example.com through the Docker VPN tunnel to Nextcloud's internal IP
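
For context, the two containers on the VPS are started roughly along these lines (a sketch based on the images' standard usage; the volume name and exact flags may differ from what I actually ran):

# OpenVPN server: config/PKI in a named volume, UDP 1194 exposed,
# NET_ADMIN needed so the container can manage tun0 and iptables
docker run -d --name vpn \
  -v ovpn-data:/etc/openvpn \
  -p 1194:1194/udp \
  --cap-add=NET_ADMIN \
  kylemanna/openvpn

# Reverse proxy: listens on port 80 and watches the Docker socket
# to generate its vhost configuration
docker run -d --name nginx-proxy \
  -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy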

I have tried adding a virtual host file for nextcloud.example.com to nginx-proxy that routes requests to the OpenVPN container's port 3000, and then, within the OpenVPN container, using iptables to forward all requests on port 3000 to Nextcloud's internal IP.

I would really appreciate any help, as I am, to be honest, a bit in the deep end here.

kylemanna/openvpn - iptables forwarding config

user@Debianwebhost:~$ docker exec -it vpn bash
bash-4.4# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  anywhere             anywhere             tcp dpt:3000 to:192.168.0.99:80
DNAT       udp  --  anywhere             anywhere             udp dpt:3000 to:192.168.0.99:80
DNAT       udp  --  anywhere             anywhere             udp dpt:3000 to:192.168.0.99:80

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
SNAT       tcp  --  anywhere             192.168.0.99         tcp dpt:http to:172.17.0.2:3000
SNAT       udp  --  anywhere             192.168.0.99         udp dpt:http to:172.17.0.2:3000
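
For reference, NAT rules like the above would be added with commands roughly like these (a sketch reconstructed from the listing, not the exact commands I originally ran):

# Inside the vpn container: send anything arriving on port 3000 to the internal
# Nextcloud address, and rewrite the source so replies come back via this container
iptables -t nat -A PREROUTING -p tcp --dport 3000 -j DNAT --to-destination 192.168.0.99:80
iptables -t nat -A PREROUTING -p udp --dport 3000 -j DNAT --to-destination 192.168.0.99:80
iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.99 --dport 80 -j SNAT --to-source 172.17.0.2:3000
iptables -t nat -A POSTROUTING -p udp -d 192.168.0.99 --dport 80 -j SNAT --to-source 172.17.0.2:3000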

nginx-proxy virtual host config

user@Debianwebhost:/etc/nginx/vhost.d$ cat nextcloud.example.com
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 80;
        access_log /var/log/nginx/access.log vhost;
        return 503;
}
# nextcloud.example.com
upstream nextcloud.example.com {
                                ## Can be connect with "bridge" network
                        # vpn
                        server 172.17.0.2:3000;
}
server {
        server_name nextcloud.example.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        location / {
                proxy_pass http://nextcloud.example.com;
        }
}
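
For reference, nginx-proxy builds upstream/server blocks like the one above from containers that set a VIRTUAL_HOST (and, for a non-default port, VIRTUAL_PORT) environment variable; judging by the "# vpn" comment, this block points at the vpn container (172.17.0.2) on port 3000. The usual way to register a backend looks roughly like this (the image name is a placeholder, shown only for comparison):

docker run -d \
  -e VIRTUAL_HOST=nextcloud.example.com \
  -e VIRTUAL_PORT=3000 \
  some-backend-image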

nginx-proxy nginx.conf

user@Debianwebhost:/etc/nginx$ cat nginx.conf

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

nginx-proxy default.conf

user@Debianwebhost:/etc/nginx/conf.d$ cat default.conf
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
resolver [hidden ips, but there are 2 of them];
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 80;
        access_log /var/log/nginx/access.log vhost;
        return 503;
}
# nextcloud.example.com
upstream nextcloud.example.com {
                                ## Can be connect with "bridge" network
                        # vpn
                        server 172.17.0.2:3000;
}
server {
        server_name nextcloud.example.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        location / {
                proxy_pass http://nextcloud.example.com;
        }
}

1 Answer

I found a solution to this. Basically, I had to add

ip route add 192.168.0.0/24 via 172.19.0.50 

to tell the VPS that any request to 192.168.0.99 (my internal network) needs to be routed through the Docker container running the VPN server (172.19.0.50).
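
With the route in place, a quick sanity check is to ask the kernel which next hop it will use for the Nextcloud box (purely a verification step, not part of the fix itself):

# On the VPS host: should report the route via the vpn container (172.19.0.50)
ip route get 192.168.0.99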

Once the request got into the VPN server container, it knew what to do with it, because I had already specified the following in the pfSense client's CCD file (/etc/openvpn/ccd/client) to make the VPN aware that any request to these IPs should go through this client:

iroute 192.168.0.0 255.255.255.0
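
Note that the CCD file is only read if the server config points at that directory, so /etc/openvpn/openvpn.conf needs a line like the following (it may already be present in the generated config):

client-config-dir /etc/openvpn/ccd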

On top of that, I also had to specify the following in the OpenVPN config (/etc/openvpn/openvpn.conf):

### Route Configurations Below
route 192.168.254.0 255.255.255.0
route 192.168.0.0 255.255.255.0

### Push Configurations Below
push "route 192.168.0.0 255.255.255.0"

Then of course open up any needed firewalls.
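
What "open up the firewalls" amounts to is setup-specific; roughly, the VPS must be allowed to forward traffic for the home subnet (Docker manages the FORWARD chain, so DOCKER-USER is the usual place for custom rules), and pfSense needs a rule permitting traffic arriving on its OpenVPN interface to reach the LAN. A sketch of the VPS side only, not necessarily the exact rules I used:

# Allow forwarded traffic to/from the home subnet on the VPS
iptables -I DOCKER-USER -d 192.168.0.0/24 -j ACCEPT
iptables -I DOCKER-USER -s 192.168.0.0/24 -j ACCEPT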

  • Hmm, I am trying to understand why you had to add the 192.168.254.0 route to your server config. If your server is at 192.168.255.0, why would you need to route 192.168.254.0 to your tun0 interface on the server?
    – user10607
    Commented Jun 23, 2019 at 11:04
  • Sorry for the delay in answering this. I believe it was because I had set up a static IP for the OpenVPN host, so it was on the 192.168.254.0 network rather than the default 192.168.255.0 network. I therefore had to add it to allow requests to be properly routed between the two networks within OpenVPN.
    – Svarto
    Commented Jul 10, 2019 at 17:23
