
I am trying to wrap my head around AWS's Application Load Balancer. I have a slightly complicated situation where I am running an EC2 instance with a Node.js server that I ultimately want accessible over HTTPS only. So initially I set up a listener on 443 that has a health check on port 443 and forwards to a target group containing my EC2 instance.

I have a Chef cookbook that automates setting the box up, setting up my Node.js server, and lastly acquiring and installing an SSL certificate (using acmetool). For that to work, the box needs ports 402, 4402, and 80 open.

So to get this working, it appeared I needed listeners for 402, 4402, and 80, and that required making an individual target group for each of those ports (this is where I started getting confused). Each of those groups then has its own health check, at which point I realized I can't have my health check on port 443 if I initially won't have an SSL certificate. That caused me to create yet another listener and group on port 12345 and make my Node.js server answer the health check on 12345. Which seemed weird, because I am basically running two Node.js servers: one on 12345 just for a health check, and the other on 443 for my real app.

I had to use 12345 because acmetool has a "redirect" daemon which listens on port 80 and redirects all requests to 443, except for the ACME challenge requests it serves in order to issue/renew a cert.
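acmetool's redirect daemon is a Go binary, but the routing rule it applies on port 80 can be sketched like this (the helper name `routePort80Request` is made up for illustration):

```javascript
// Decide what to do with a plain-HTTP request, mimicking acmetool's
// redirect daemon: serve ACME HTTP-01 challenges directly, redirect
// everything else to HTTPS on the same host.
function routePort80Request(path, host) {
  // HTTP-01 challenges arrive under this well-known prefix and must be
  // answered over plain HTTP, not redirected.
  if (path.startsWith('/.well-known/acme-challenge/')) {
    return { action: 'serve-challenge', path };
  }
  return { action: 'redirect', location: `https://${host}${path}` };
}
```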

So... my issue is that I now have all these groups, which all share an identical health check endpoint on port 12345. That feels super inefficient, as I can only assume the load balancer will ping that same endpoint n times, once for each group. Which made me think I must be doing this all wrong.

  • I tend to agree you are doing this all wrong. If your health check is a “ping” then ports and certificates don’t even come into play. There is also no reason you can’t put multiple listeners in one group. But I know nothing about node.js, chef, or acmetool. Why would there need to be inbound ports open for the server to go out and obtain a certificate? Commented Jun 24, 2018 at 5:40

1 Answer


I'd agree with you that you're doing it wrong. The way I'd go about this is as follows:

  • ALB listening on port 443, terminating SSL traffic.
  • Use Amazon ACM to generate your certificate and attach this to your HTTPS listener.
  • ALB Target Group sending traffic unencrypted to port 80 on the instance.
  • Security group for instances only allows port 80 access from your ALB.

So effectively, your client's request path is:

Client -> [HTTPS] -> ALB -> [HTTP] -> Instance

This achieves a few things:

  1. Prevents you having to manage SSL certificates on individual instances. Your application can still determine that the client connected over HTTPS by way of the X-Forwarded-Proto header that the ALB adds to its request (and since you'll only have an HTTPS listener, this header's value will always be https).
  2. As the security group restricts port 80 traffic to that originating from the ALB, you can be sure that no one can access your instance's port 80 directly. If your instances don't even have public IPs, that is further protection still.
  3. Amazon ACM certificates are free, and have the added benefit that Amazon will take care of the private key handling and renewal for you (if you authenticate with DNS, that is). This removes another headache.

The only caveat is that your traffic, once it leaves the load balancer destined for your instances, is now unencrypted. This is usually fine unless you have compliance reasons for requiring all traffic to remain encrypted end-to-end.

In that case, it is possible (apparently; I've never done this myself) to use a self-signed certificate on your instances. You could create a certificate with a lengthy lifetime (3 years, maybe) and use the OpsWorks app's SSL certificate section to upload its parts. These then become available in the app's data bag on the instance, where you can extract the SSL certificate parts and install them into your application before starting the service. Then set your target group to route traffic to HTTPS/443 on your instance, and adjust health checks and security groups accordingly.
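As a hedged sketch of that variant: the certificate parts from the OpsWorks data bag could be mapped into the options a Node.js `https` server expects. The field names below mirror the OpsWorks app `ssl_configuration` attributes (`certificate`, `private_key`, `chain`), but treat the exact shape as an assumption:

```javascript
// Assemble TLS options for https.createServer() from an OpsWorks-style
// ssl_configuration object containing PEM strings.
function tlsOptionsFromDataBag(sslConfiguration) {
  const options = {
    cert: sslConfiguration.certificate, // PEM certificate
    key: sslConfiguration.private_key,  // PEM private key
  };
  if (sslConfiguration.chain) {
    options.ca = sslConfiguration.chain; // intermediate chain, if present
  }
  return options;
}

// In production, roughly:
//   const https = require('https');
//   https.createServer(tlsOptionsFromDataBag(app.ssl_configuration), handler).listen(443);
```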

  • I already have SSL certs set up with the load balancer, but yes, the problem is much greater than I even described. I am using an S3 bucket as a static website, with hopes to communicate with the EC2 instance as an API server, and so I have that EC2 instance accessible as a subdomain... but that EC2 instance also needs to be running a WebSocket server over SSL. And I am thinking this is going to be impossible, because the WebSocket server has to use the bare domain, so I need a bare-domain SSL cert, and it can't be self-signed.
    – patrick
    Commented Jun 24, 2018 at 14:50
  • And because my bare domain is a static S3 website, it is impossible for me to use a tool like acmetool (Let's Encrypt), because I can't "prove" that I control the bare domain, it being a static S3 bucket. So I guess I need to give up on using S3 and just use the EC2 instance for everything.
    – patrick
    Commented Jun 24, 2018 at 14:51
  • Actually I figured out a way to make a redirect rule in my S3 bucket and send it to my API server, so ACME can verify and give me a bare-domain cert... However, I need WebSockets, and it appears it's impossible to have my API server run a WebSocket server on the bare domain, which makes everything I've done a waste, and so I have to start all over.
    – patrick
    Commented Jun 24, 2018 at 22:48
