
I am trying to use NGINX as a load balancer for an Elastic Cloud Enterprise platform consisting of 3 VMs. The intention is to load balance requests such as http://xxxxxxx.ece.dev.org:9200, where xxxxxxx is the Elasticsearch cluster ID and varies per cluster.

I have a wildcard DNS record which directs *.ece.dev.org to the NGINX IP e.g. *.ece.dev.org -> 10.1.2.99

Then in the conf file, the server is defined as:

upstream ece-proxy {
    server 10.1.2.3:9200;   # these are the servers which host the platform
    server 10.1.2.4:9200;
    server 10.1.2.5:9200;
}

server {
    listen 80;
    server_name *.ece.dev.org;

    location / {
        proxy_pass http://ece-proxy;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}

With the NGINX configuration above, the URL http://xxxxxxxxxx.ece.dev.org:9200 could not connect.

If the wildcard DNS record directly resolves to one of the cloud VMs (e.g. *.ece.dev.org -> 10.1.2.3), then the URL is able to connect successfully.

(Browsing directly to 10.1.2.3:9200, 10.1.2.4:9200, etc. returns nothing; a cluster ID must be provided.)

How should NGINX be configured to handle wildcard subdomains for load balancing?

2 Answers


We use the following for server_name:

server_name ~^(.*)\.ece\.dev\.org$;
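For reference, this regex server_name matches any host ending in .ece.dev.org and captures the leading label (the cluster ID) into $1. Below is a minimal sketch of how it could slot into the server block from the question; the named capture "cluster" is purely illustrative, and it assumes the ECE proxies route requests by the incoming Host header:

server {
    listen 80;
    # Named capture exposes the cluster ID as $cluster, should it be needed later
    server_name ~^(?<cluster>.+)\.ece\.dev\.org$;

    location / {
        proxy_pass http://ece-proxy;
        proxy_http_version 1.1;
        proxy_set_header Host $host;   # pass the original Host so ECE can pick the cluster
    }
}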

  • Adding some explanation of what this string does would make this a better answer. Commented Aug 16, 2019 at 3:33

I myself haven't used NGINX, but I can see you have configured the NGINX server to listen on port 80, while the requests are being sent to port 9200.
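That port mismatch fits the symptoms in the question: the client queries xxxxxxxxxx.ece.dev.org:9200, but nothing is listening on 9200 at the NGINX host. A minimal sketch of the adjustment, assuming no other service is already bound to 9200 on the NGINX VM, is to listen on the port the clients actually use (the alternative being to have clients drop :9200 and use port 80):

server {
    listen 9200;                       # match the port the clients query
    server_name *.ece.dev.org;

    location / {
        proxy_pass http://ece-proxy;   # the upstream block from the question
        proxy_http_version 1.1;
        proxy_set_header Host $host;   # keep the original Host, since a cluster ID must be provided
    }
}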

