
I'm trying to set up nginx as a reverse proxy, with a large number of backend servers. I'd like to start up the backends on-demand (on the first request that comes in), so I have a control process (controlled by HTTP requests) which starts up the backend depending on the request it receives.

My problem is configuring nginx to do it. Here's what I have so far:

server {
    listen 80;
    server_name $DOMAINS;

    location / {
        # redirect to named location
        #error_page 418 = @backend;
        #return 418; # doesn't work - error_page doesn't work after redirect

        try_files /nonexisting-file @backend;
    }

    location @backend {
        proxy_pass http://$BACKEND-IP;
        error_page 502 @handle_502; # Backend server down? Try to start it
    }

    location @handle_502 { # What to do when the backend server is not up
        # Ping our control server to start the backend
        proxy_pass http://127.0.0.1:82;
        # Look at the status codes returned from control server
        proxy_intercept_errors on;
        # Fallback to error page if control server is down
        error_page 502 /fatal_error.html;
        # Fallback to error page if control server ran into an error
        error_page 503 /fatal_error.html;
        # Control server started backend successfully, retry the backend
        # Let's use HTTP 451 to communicate a successful backend startup
        error_page 451 @backend;
    }

    location = /fatal_error.html {
        # Error page shown when control server is down too
        root /home/nginx/www;
        internal;
    }
}

This doesn't work - nginx seems to ignore any status codes returned from the control server. None of the error_page directives in the @handle_502 location work, and the 451 code gets sent as-is to the client.

I gave up trying to use internal nginx redirection for this, and tried modifying the control server to emit a 307 redirect to the same location (so that the client would retry the same request, but now with the backend server started up). However, nginx then stupidly overwrites the status code with the one it got from the backend request attempt (502), even though the control server sends a "Location" header. I finally got it "working" by changing the error_page line to error_page 502 =307 @handle_502;, thus forcing all control server replies to be sent back to the client with a 307 code. This is very hacky and undesirable, because 1) there is no control over what nginx does next depending on the control server's response (ideally, the backend should only be retried if the control server reports success), and 2) not all HTTP clients follow redirects automatically (e.g. curl users and libcurl-using applications need to enable following redirects explicitly).
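
For clarity, the hacky variant described above looked roughly like this (a sketch only; $BACKEND-IP is a placeholder as in the config above):

location @backend {
    proxy_pass http://$BACKEND-IP;
    # Hack: send every reply from @handle_502 back to the client as a 307,
    # relying on the control server's Location header to make the client retry
    error_page 502 =307 @handle_502;
}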

What's the proper way to get nginx to try to proxy to upstream server A, then B, then A again (ideally, only when B returns a specific status code)?

2 Answers


Key points:

  • Don't bother with upstream blocks for this kind of failover, where pinging one server brings another one up - there's no way to tell nginx (at least, not the FOSS version) that the first server is up again. nginx will try the servers in order on the first request, but not on follow-up requests, regardless of any backup, weight or fail_timeout settings.
  • You must enable recursive_error_pages when implementing failover using error_page and named locations.
  • Enable proxy_intercept_errors to handle error codes sent from the upstream server.
  • The = syntax (e.g. error_page 502 = @handle_502;) is required to correctly handle error codes in the named location. If = is not used, nginx will use the error code from the previous block.

Here is a summary:

server {
    listen ...;
    server_name $DOMAINS;

    recursive_error_pages on;

    # First, try "Upstream A"
    location / {
        error_page 418 = @backend;
        return 418;
    }

    # Define "Upstream A"
    location @backend {
        proxy_pass http://$IP:81;
        proxy_set_header  X-Real-IP     $remote_addr;
        # Add your proxy_* options here
    }

    # On error, go to "Upstream B"
    error_page 502 = @handle_502;

    # Fallback static error page, in case "Upstream B" fails
    root /home/nginx/www;
    location = /_static_error.html {
        internal;
    }

    # Define "Upstream B"
    location @handle_502 { # What to do when the backend server is not up
        proxy_pass ...;
        # Add your proxy_* options here
        proxy_intercept_errors on;          # Look at the error codes returned from "Upstream B"
        error_page 502 /_static_error.html; # Fallback to error page if "Upstream B" is down
        error_page 451 = @backend;          # Try "Upstream A" again
    }
}

Original answer / research log follows:


Here's a better workaround I found, which is an improvement in that it doesn't require a client redirect:

upstream aba {
    server $BACKEND-IP;            # Upstream A
    server 127.0.0.1:82 backup;    # Upstream B (the control server)
    server $BACKEND-IP  backup;    # Upstream A again
}

...

location / {
    proxy_pass http://aba;
    proxy_next_upstream error http_502; # on a connection error or a 502, move on to the next server in "aba"
}

Then, just get the control server to return 502 on "success" and hope that code is never returned by backends.


Update: nginx keeps marking the first entry in the upstream block as down, so it does not try the servers in order on successive requests. I've tried adding weight=1000000000 fail_timeout=1 to the first entry with no effect. So far I have not found any solution which does not involve a client redirect.
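
For reference, the variant I tried (to no effect) looked roughly like this:

upstream aba {
    # Attempt to make nginx keep preferring the first server - did not help,
    # nginx still marks it as down after a failed request
    server $BACKEND-IP weight=1000000000 fail_timeout=1;
    server 127.0.0.1:82 backup;
    server $BACKEND-IP  backup;
}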


Edit: One more thing I wish I had known - to get the error status from the error_page handler, use this syntax: error_page 502 = @handle_502; - the equals sign causes nginx to take the error status from the handler.


Edit: And I got it working! In addition to the error_page fix above, all that was needed was enabling recursive_error_pages!
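
In other words, relative to the configuration in the question, the delta was roughly these two changes:

server {
    ...
    # Allow error_page to fire again from a location that was itself
    # reached via error_page (needed for the 502 -> @handle_502 -> @backend chain)
    recursive_error_pages on;

    location @backend {
        proxy_pass http://$BACKEND-IP;
        # The "=" makes nginx take the final status from the @handle_502 handler
        # (and, after the retry, from @backend) instead of keeping the original 502
        error_page 502 = @handle_502;
    }
    ...
}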

  • For me, proxy_next_upstream did the trick (though my scenario was not as complex as yours): I just wanted nginx to try the next server if an error occurred, so I had to add proxy_next_upstream error timeout invalid_header non_idempotent; (non_idempotent, because I mainly want to forward POST requests).
    – Philipp
    Commented Jul 20, 2017 at 19:35
  • Could you post your full solution? You mention error_page and recursive_error_page but not in the full context.
    – Marc
    Commented May 16, 2021 at 16:57
  • @Marc OK, added. Commented May 17, 2021 at 4:55

You could try something like the following:

upstream backend {
    server a.example.net;
    server b.example.net backup;
}

server {
    listen   80;
    server_name www.example.net;

    proxy_next_upstream error timeout http_502; # on errors, timeouts, and 502 responses, try the next server in the upstream group

    location / {
        proxy_pass http://backend;
        proxy_redirect      off;
        proxy_set_header    Host              $host;
        proxy_set_header    X-Real-IP         $remote_addr;
        proxy_set_header    X-Forwarded-For   $remote_addr;
    }

}
  • nginx won't retry a.example.net after it has failed once on the same request. It will send the client the error encountered when trying to connect to b.example.net, which isn't going to be what they expected, unless I implemented proxying in the control server as well. Commented Sep 8, 2013 at 12:24
  • And what would happen with your config in the following situation: a request to upstream A fails, upstream B fails, and then we try upstream A again and it also fails (502)?
    – ALex_hha
    Commented Sep 8, 2013 at 12:56
  • Upstream B is the control server. Its purpose is to make sure that the next request to upstream A will succeed. The goal is to try upstream A, if it failed try upstream B, if it "succeeded" (using our internal convention of "success"), try upstream A again. If my question wasn't clear enough, let me know how I can improve it. Commented Sep 8, 2013 at 13:02
  • Hmm, let's assume upstream A is down, e.g. due to some hardware issue. What will upstream B do? Is it capable of returning a response to the client's request?
    – ALex_hha
    Commented Sep 8, 2013 at 13:10
  • This problem is not about failover for hardware failures. This problem is about starting upstream backends on-demand. If the control server (upstream B) can't reactivate the backend (upstream A), then ideally the user should get an appropriate error message, but that is not the problem I'm trying to solve - the problem is getting nginx to retry A again after B, within the same request. Commented Sep 8, 2013 at 13:18
