
I currently have the following nginx config file:

server_tokens off;
upstream backend {
    server unix:/home/www/api.to/app.sock weight=1;
    #server 1.2.3.4 weight=1;
    #server api3.example.com weight=1;

}

server {
    server_name api.example.com;
    client_max_body_size 10000m;
    gzip_disable "msie6";
    access_log off;
    error_log on;
    gzip_vary on;
    gzip on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/javascript application/font-ttf ttf text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

    location /word/report.html {
       alias /var/log/nginx/report.html;
    }
    location / {

        proxy_pass http://backend;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header REMOTE_ADDR $remote_addr;
        proxy_read_timeout 1000; # this

    }


    location /static/downloads {
        alias /home/www/api.to/static/downloads/;
        add_header Content-disposition "attachment; filename=$1";
        default_type application/octet-stream;
    }





    location /static/ {
        alias /home/www/api.to/static/;
        expires 35d;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        access_log off;
    }



    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/api.example.com-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/api.example.com-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot


}

When I make a POST request and the upstream only points to the local unix socket, it works fine.

When I change the upstream to either 1.2.3.4 or api3.example.com, the browser reports a CORS error.

The same app runs on both systems (1.2.3.4 being the server IP of api3.example.com).

If I send a POST request from my local machine directly to api3.example.com, it responds and functions. If I send a POST request from my local machine directly to 1.2.3.4, it also responds and functions.

I rolled my own custom CORS middleware, shown below, although I do not think it plays a part, since it works fine when requests come from other sources; the problem only appears when going through the upstream.

middleware.py:

from django import http


class CorsMiddleware(object):
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        if (request.method == "OPTIONS" and "HTTP_ACCESS_CONTROL_REQUEST_METHOD" in request.META):
            response = http.HttpResponse()
            response["Content-Length"] = "0"
            response["Access-Control-Max-Age"] = 86400
        response["Access-Control-Allow-Origin"] = "*"
        response["Access-Control-Allow-Methods"] = "DELETE, GET, OPTIONS, PATCH, POST, PUT"
        response["Access-Control-Allow-Headers"] = "cache-control, accept, accept-encoding, authorization, content-type, dnt, origin, user-agent, x-csrftoken, x-requested-with"
        return response
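Incidentally, the middleware logic can be sanity-checked in isolation, without a running Django server. A minimal sketch, assuming stub objects: FakeRequest and FakeResponse below are my own stand-ins, not Django classes, since the middleware only relies on request.method, request.META, and dict-style header assignment.

```python
# Stand-ins for the Django request and django.http.HttpResponse objects;
# the middleware touches nothing beyond method, META, and header assignment.

class FakeRequest:
    def __init__(self, method, meta=None):
        self.method = method
        self.META = meta or {}


class FakeResponse(dict):
    """Headers-only stand-in for an HTTP response."""


class CorsMiddleware:
    # Same logic as the middleware above, with FakeResponse replacing
    # http.HttpResponse() on preflight requests.
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        if (request.method == "OPTIONS"
                and "HTTP_ACCESS_CONTROL_REQUEST_METHOD" in request.META):
            response = FakeResponse()
            response["Content-Length"] = "0"
            response["Access-Control-Max-Age"] = 86400
        response["Access-Control-Allow-Origin"] = "*"
        response["Access-Control-Allow-Methods"] = "DELETE, GET, OPTIONS, PATCH, POST, PUT"
        response["Access-Control-Allow-Headers"] = "cache-control, accept, accept-encoding, authorization, content-type, dnt, origin, user-agent, x-csrftoken, x-requested-with"
        return response


mw = CorsMiddleware(lambda request: FakeResponse())

# A preflight OPTIONS request gets the short-circuited response.
preflight = mw(FakeRequest("OPTIONS", {"HTTP_ACCESS_CONTROL_REQUEST_METHOD": "POST"}))
print(preflight["Access-Control-Allow-Origin"])  # *
print(preflight["Access-Control-Max-Age"])       # 86400

# A plain GET still gets the CORS headers attached.
plain = mw(FakeRequest("GET"))
print(plain["Access-Control-Allow-Methods"])     # DELETE, GET, OPTIONS, PATCH, POST, PUT
```

If this prints the expected headers, the middleware itself is behaving, which points the finger at what happens between nginx and the upstream rather than at Django.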

Given all this, the CORS errors that show up generally say Response body is not available to scripts (Reason: CORS Missing Allow Origin), as well as Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://api.example.to/new/pdf-word/. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 404.

Other errors include CORS Missing Allow Origin and NS_ERROR_DOM_BAD_URI.

I have spent quite a bit of time trying to debug this, still to no avail. The nginx version is 1.22.1.


1 Answer


The only way I was able to get this working was by sending the upstream traffic to another port on the receiving server, i.e.:

upstream backend {
    server unix:/home/www/api.to/app.sock weight=1;
    server 1.2.3.4:8000 weight=1;
}

And in the receiving server's nginx file add a

listen 8000;

next to the existing listen 80/443 (whichever you use), or, as I just realized, send the upstream straight to port 443. This was all caused because I was sending traffic via the upstream to the bare IP, which defaults to port 80, while my app was listening on the SSL port, 443. Hence why it works when listening on and sending to port 8000. 🤦
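Since the upstream was ultimately meant to reach the app on 443, the upstream member can point there directly instead of adding a new port. A sketch of what that could look like, using the names from the question; the proxy_ssl_name value is an assumption and must match the certificate the receiving server actually presents:

```nginx
upstream backend {
    server unix:/home/www/api.to/app.sock weight=1;
    server 1.2.3.4:443 weight=1;  # hit the SSL port, not the implicit :80
}

server {
    # ...
    location / {
        # https:// here so nginx speaks TLS to the 1.2.3.4:443 member
        proxy_pass https://backend;
        proxy_ssl_server_name on;          # send SNI to the upstream
        proxy_ssl_name api3.example.com;   # name the upstream's certificate covers
        proxy_set_header Host $http_host;
    }
}
```

Note that with proxy_pass https://backend, the unix socket member is also spoken to over TLS, so mixing a plain-HTTP socket with an HTTPS backend in one upstream block will not work as-is; splitting them into separate upstreams is one way around that.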
