
I am imagining a Kubernetes IPv6-based cluster with Nodes in a private subnet and an Ingress deployed that sets up an application load balancer in a public subnet, which receives a public IPv6 address. You can find this setup recommended for AWS EKS with IPv6, and generally also with other cloud providers.

Now an IPv6 client connects to my IPv6 load balancer, which then balances traffic to IPv6 Nodes. Does NAT66 happen at the load-balancing step?

I have tried a multitude of different search phrases and looked at IPv6 prefixes and so forth. With IPv4 you would have NAT in use to translate from the public IP to cluster IPs. But when I try to find an explanation of the same with IPv6, everything I read only tells me that NAT is not needed for IPv6, not how it is done instead. Please help me understand how this works.

Maybe this picture helps (it shows the described route in dual stack form, from which I would consider only the case of IPv6 -> IPv6 -> IPv6): https://aws.github.io/aws-eks-best-practices/networking/ipv6/ipv4-internet-to-eks-ipv6.png

  • NAT66 (RFC 6296, EXPERIMENTAL) is one-to-one NAT that explicitly forbids the one-to-many NAT: "Since there is significant detriment caused by modifying transport layer headers and very little, if any, benefit to the use of port mapping in IPv6, NPTv6 Translators that comply with this specification MUST NOT perform port mapping."
    – Ron Maupin
    Commented Apr 19 at 16:43
  • Also, look at anycast.
    – Ron Maupin
    Commented Apr 19 at 17:25
  • Thank you, the hint to anycast really helped me.
    – flipcc
    Commented Apr 23 at 8:45
  • Load balancers usually operate at the application layer. They create a new connection to the backend, rather than modifying the original one. Commented May 30 at 11:44

2 Answers

1

TLDR;

  1. your load balancer could be performing a NAT-like behavior by terminating the client connection and placing the request into a new IPv6 connection to a backend (see the sketch after this list)
  2. in the case of geographically dispersed servers, IPv6 anycast could be used (of course in conjunction with 1.)
  3. it seems your load balancer could also be performing the discouraged NAT66
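
To make point 1 concrete, here is a minimal sketch of the terminate-and-reconnect behaviour in plain Python sockets. It is not what any particular cloud load balancer runs; it assumes plain TCP (no HTTP parsing, no error handling), and the addresses come from the 2001:db8::/32 documentation prefix, so they are purely illustrative:

```python
import socket
import threading

# Purely illustrative addresses (2001:db8::/32 is the documentation prefix).
LISTEN_ADDR = ("2001:db8::10", 443)      # the balancer's public IPv6 address
BACKEND_ADDR = ("2001:db8:1::21", 8080)  # a node's private IPv6 address

def pump(src, dst):
    """Copy bytes one way until the sending side closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def handle(client):
    # A *new* TCP connection is opened towards the backend. Its source address
    # is the proxy's own address; the client's packets are never rewritten,
    # their connection simply terminates here.
    backend = socket.create_connection(BACKEND_ADDR)
    threading.Thread(target=pump, args=(client, backend), daemon=True).start()
    pump(backend, client)

with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as listener:
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen()
    while True:
        conn, addr = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Because the backend connection is newly originated by the proxy, the backend sees the proxy's address as the source and the client's packets are never rewritten, so no NAT66 is involved. In the GCP example quoted below, the re-originated connection even switches address family to IPv4.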

Longer version;

While Peter Green's answer is useful, I wanted to add this TLDR (thank you Ron Maupin for mentioning anycast to me) and list the following links, which offer graphics and insights from two major cloud providers and which I found helpful:

  • load balancing in GCP and an explanation of the use of anycast as well as IPv6 termination (though here with IPv4 backends; I was looking for the general principle, so it fits)

IPv6 termination enables you to handle IPv6 requests from your users and proxy them over IPv4 to your backends. By using IPv6, you can do the following:

Use a single anycast IPv6 address for multi-region deployment. You only need one load balancer IPv6 address for application instances running across multiple regions. This means that your DNS server has a single AAAA record and that you don't need to load balance among multiple IPv6 addresses. Caching of AAAA records by clients is not an issue because there's only one address to cache. User requests to the IPv6 address are automatically load balanced to the closest healthy backend with available capacity.

IPv6 termination for load balancing

When a user connects to the load balancer through IPv6, the following happens:

  1. Your load balancer, with its IPv6 address and forwarding rule, waits for user connections.

  2. An IPv6 client connects to the load balancer via IPv6.

  3. The load balancer acts as a reverse proxy and terminates the IPv6 client connection. It places the request into an IPv4 connection to a backend.

  4. On the reverse path, the load balancer receives the IPv4 response from the backend, and then places it into the IPv6 connection back to the original client.

  • load balancing in Azure, whose documentation explicitly describes NAT from the load balancer's public IPv6 address to the VMs' private IPv6 addresses:

Once deployed, an IPv4 or IPv6-enabled Internet client can communicate with the public IPv4 or IPv6 addresses (or hostnames) of the Azure Internet-facing Load Balancer. The load balancer routes the IPv6 packets to the private IPv6 addresses of the VMs using network address translation (NAT). The IPv6 Internet client cannot communicate directly with the IPv6 address of the VMs.

  • a blog post that is really understandable and mentions the NAT-like behavior of IPv6 termination, as well as the fact that NAT66 implementations do exist albeit not being standardized

It should also be mentioned that outbound web proxies (like web content filters, a.k.a. Secure Web Gateways (SWGs)) and inbound reverse proxies (like server load balancers) perform a NAT-like behavior. These systems aren’t performing a NAT66 function, per se. They are actually terminating the TCP connection on one interface and establishing a new TCP connection using the other interface. This has the effect of changing the source address as the connection is made through the proxy.

NAT66 Exists

Contrary to popular belief, there are ways to perform NAT with IPv6, and vendors have implemented NAT66 into their products – even touting NAT66 on their product data sheets. Even though the IPv6 purists cringe when NAT66 is mentioned, and the IETF has not formally created an RFC to define how it should function, implementations of NAT66 still exist.
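
Since NAT66/NPTv6 keeps coming up, here is a rough Python sketch of what such a one-to-one prefix translation amounts to, using the standard ipaddress module. Real NPTv6 (RFC 6296) uses a checksum-neutral mapping, which this toy version ignores, and the prefixes are invented for illustration:

```python
import ipaddress

# Toy NPTv6-style one-to-one prefix translation. Real NPTv6 (RFC 6296) uses a
# checksum-neutral mapping; this sketch just swaps the /64 prefix. The prefixes
# below are invented for illustration.
INSIDE = ipaddress.IPv6Network("fd00:1234::/64")       # ULA prefix used internally
OUTSIDE = ipaddress.IPv6Network("2001:db8:aa::/64")    # global prefix used externally

def swap_prefix(addr: ipaddress.IPv6Address,
                new_net: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    """Keep the host bits of addr, replace its prefix with new_net's."""
    host_bits = int(addr) & ((1 << (128 - new_net.prefixlen)) - 1)
    return ipaddress.IPv6Address(int(new_net.network_address) | host_bits)

inside_addr = ipaddress.IPv6Address("fd00:1234::21")
outside_addr = swap_prefix(inside_addr, OUTSIDE)   # what the outside world sees
restored = swap_prefix(outside_addr, INSIDE)       # reverse mapping on return traffic
print(outside_addr, restored)                      # one address <-> one address, no ports
```

The takeaway is that exactly one inside address maps to exactly one outside address and no ports are involved, which is what the comment above about the ban on port mapping refers to.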

1

In general there are two main approaches to network level load balancing.

The first is to have the load balancer perform translation of the destination address. Return packets must then be passed through a reverse translation before being sent out again.

If only incoming traffic is allowed, then there is no need for the translation process to be stateful or to modify port numbers, since there is no risk of port conflicts. If the same IPs are used for both incoming and outgoing traffic, things get more complicated: a stateless approach may still be possible, but port-based rules are likely to apply different translations to outgoing traffic than to incoming traffic.

Alternatively, the need for stateful translation and/or complex translation rules may be avoided by using separate IP addresses for incoming and outgoing traffic. Then the reverse translation can be applied statelessly to traffic from the "service address", while traffic from the "outgoing address" passes through unmolested.
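
A minimal sketch of this first approach, assuming stateless translation, a trivial choice function based on the client address, and made-up documentation-prefix addresses:

```python
import ipaddress

# Hypothetical addresses using the documentation prefix.
SERVICE_ADDR = ipaddress.IPv6Address("2001:db8::80")          # clients connect here
BACKENDS = [ipaddress.IPv6Address("2001:db8:1::11"),
            ipaddress.IPv6Address("2001:db8:1::12")]

def pick_backend(client: ipaddress.IPv6Address) -> ipaddress.IPv6Address:
    # Stateless choice: derived from the client address alone, so every packet
    # from a given client lands on the same backend without a connection table.
    return BACKENDS[int(client) % len(BACKENDS)]

def translate_inbound(src, dst):
    """Client -> service address becomes client -> backend address."""
    return (src, pick_backend(src)) if dst == SERVICE_ADDR else (src, dst)

def translate_outbound(src, dst):
    """Reverse translation: backend replies appear to come from the service address."""
    return (SERVICE_ADDR, dst) if src in BACKENDS else (src, dst)

client = ipaddress.IPv6Address("2001:db8:f::1")
print(translate_inbound(client, SERVICE_ADDR))     # rewritten destination
print(translate_outbound(BACKENDS[0], client))     # rewritten source on the way back
```

Because the backend can be recomputed from the client address for every packet, no per-connection state is needed; the reverse translation only has to put the service address back as the source.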

The second is to have the same address on multiple servers (generally as a loopback IP); the load balancer makes packet-forwarding decisions but does not modify the packets. This generally relies on the load balancer being able to forward packets to the servers without passing through an intermediate router.

When the same address is assigned to multiple servers like this, it will normally not be the only address assigned to the server. The servers will have individual addresses assigned for outgoing and administrative traffic.
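
A sketch of the decision made in this second approach: the balancer only picks which server (each already configured with the shared service address) receives the unmodified packet. The backend names and the flow-hash below are hypothetical:

```python
import hashlib

# All backends already have the shared service address configured (e.g. on a
# loopback interface); the balancer only decides which one receives the packet,
# without rewriting anything. Backend names here are hypothetical.
BACKENDS = ["node-a", "node-b", "node-c"]

def pick_backend(src_ip: str, src_port: int) -> str:
    """Hash the flow so every packet of a TCP connection goes to the same server."""
    digest = hashlib.sha256(f"{src_ip}:{src_port}".encode()).digest()
    return BACKENDS[int.from_bytes(digest[:8], "big") % len(BACKENDS)]

print(pick_backend("2001:db8:f::1", 51515))   # e.g. 'node-b'
```

Hashing on the flow keeps all packets of one TCP connection on the same server, which matters precisely because the balancer keeps no per-connection state.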

Either of these approaches is possible with either IPv4 or IPv6. So what does IPv6 change? Two things.

  1. IPv6 addresses are in far more plentiful supply than IPv4 ones. It is no longer necessary to use one-to-many translation for the purpose of saving IPs.
  2. IPv6 NAT in general is discouraged, and stateful IPv6 NAT is even more strongly discouraged. That hasn't stopped people from implementing stateful IPv6 NAT, though.

From that single slide, there is not enough information to determine exactly what AWS is doing.
