Getting the source IP behind the LoadBalancer
Usage of the Public Cloud Load Balancer with Managed Kubernetes Service (MKS) is now in General Availability.
However, this LoadBalancer (based on the Octavia project) is not yet the default one for clusters running Kubernetes versions below 1.31. For those clusters, you must use the annotation loadbalancer.ovhcloud.com/class: octavia to deploy an Octavia LoadBalancer from your MKS cluster.
Before you begin
This tutorial assumes that you already have a working OVHcloud Managed Kubernetes cluster and that you have deployed an application on it using the OVHcloud Managed Kubernetes LoadBalancer. If you want to know more about these topics, please refer to the using the OVHcloud Managed Kubernetes LoadBalancer documentation.
When a LoadBalancer Service resource is created inside a Managed Kubernetes cluster, an associated Public Cloud Load Balancer is automatically created, allowing public access to your K8S application. The Public Cloud Load Balancer service is charged hourly and will appear in your Public Cloud project. For more information, please refer to the following documentation: Network Load Balancer price
The problem
When you deploy your HTTP services in NodePort mode, you can read the request's Remote Address directly from the server (for example using $_SERVER['REMOTE_ADDR'] in PHP or $ENV{'REMOTE_ADDR'} in Perl). This address corresponds to the original requester or to the last proxy between them and your cluster.
When deploying the services in LoadBalancer mode, things are a bit different: the Load Balancer acts as a proxy, so the Remote Address gives you the IP address of the Load Balancer instead. How can you get the source IP of the request in this case?
This tutorial describes how to deploy a LoadBalancer service on OVHcloud Managed Kubernetes and preserve the source IP.
Getting the request's source IP behind the LoadBalancer
The easiest way to deploy services behind the Load Balancer while keeping the source IP is to place your services under an Ingress, itself behind the LoadBalancer.
The Ingress is exposed to the outside of the cluster via the LoadBalancer, and it routes incoming traffic to your services according to the configured rules. An additional advantage of this setup is the cost: you can have many services behind a single LoadBalancer.
In this tutorial we use the most basic Ingress Controller, the NGINX Ingress Controller, where an NGINX server takes the role of reverse proxy.
1. Installing the NGINX Ingress Controller
We can deploy the official NGINX Ingress Controller with the manifest file or with the Helm chart.
Please choose one way or the other and follow the corresponding paragraph.
Installing with the manifest file
It creates the namespace, serviceaccount, role and all the other Kubernetes objects needed for the Ingress Controller, and then it deploys the controller:
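The original install command is not reproduced here; a typical invocation, assuming the upstream cloud-provider manifest (the controller version below is an example, check the ingress-nginx releases page for a current one), is:

```shell
# Deploy the NGINX Ingress Controller from the upstream manifest
# (controller-v1.10.1 is an example version; pick a current release)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
```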
Installing with the Helm chart
It creates the namespace, serviceaccount, role and all the other Kubernetes objects needed for the Ingress Controller, and then it deploys the controller:
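The Helm commands are not reproduced here; a standard installation using the official chart, with the release name and namespace below being common conventions rather than requirements, looks like this:

```shell
# Add the official ingress-nginx Helm repository and install the chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```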
Check your deployment
You can use kubectl to get the state of the service and recover the Load Balancer's IP:
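For example, assuming the default service name and namespace created by the installation above:

```shell
# Show the ingress-nginx controller Service and its EXTERNAL-IP
kubectl get service ingress-nginx-controller -n ingress-nginx
```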
You should see your newly created Ingress service:
As LoadBalancer creation is asynchronous and provisioning the Load Balancer can take several minutes, you may see <pending> in the EXTERNAL-IP column while the Load Balancer is being set up. In this case, please wait a few minutes and try again.
2. Patching the Ingress Controller
Now you need to patch the Ingress controller to support the proxy protocol.
Depending on whether your Kubernetes cluster uses a private network or not, the proxy protocol configuration differs. Follow the parts of the tutorial matching your setup.
a. [PUBLIC NETWORK ONLY] Get the list of the egress load balancer IPs
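The original command is not reproduced here. As a starting point, you can read the IP(s) published in the Service status (the Load Balancer's public VIP), and cross-check them in your Public Cloud project; the exact set of egress IPs depends on your Load Balancer setup, so verify it in the OVHcloud Control Panel or via the OpenStack API:

```shell
# IP(s) published in the Service status (the Load Balancer's public VIP)
kubectl get service ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

# Cross-check against the Octavia Load Balancers of your Public Cloud project
# (requires the OpenStack CLI with the Octavia plugin)
openstack loadbalancer list -c name -c vip_address
```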
You should see something like this:
b. [PRIVATE NETWORK ONLY] Get the list of the egress load balancer IPs
When your Managed Kubernetes cluster is attached to a vRack, each load balancer takes two random IP addresses from your subnet. Your egress IP list is therefore your subnet range.
For the rest of this documentation, we consider our subnet uses the 10.0.0.0/20 range. Don't forget to replace it with your own!
Patching methods
We can update the NGINX Ingress Controller configuration with manifest files or with Helm. Please choose one way or the other and follow the corresponding paragraph.
Patching with manifest files
Copy the following YAML snippet into a patch-ingress-controller-service.yml file:
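The original snippet is not reproduced here; a sketch of the Service patch follows. The annotation name used below is an assumption based on the OVHcloud load balancer annotations, so verify it against the current OVHcloud documentation for your load balancer class:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Assumed annotation: ask the OVHcloud Load Balancer to send the
    # PROXY protocol header to the backends
    service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
```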
And apply it in your cluster:
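Assuming the file name from the previous step, the patch can be applied with:

```shell
# Apply the patch file to the controller Service
kubectl -n ingress-nginx patch service ingress-nginx-controller \
  --patch "$(cat patch-ingress-controller-service.yml)"
```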
Copy the following YAML snippet into a patch-ingress-controller-configmap.yml file and modify the proxy-real-ip-cidr parameter according to your cluster configuration:
a. [PUBLIC NETWORK ONLY]
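The original snippet is not reproduced here; a sketch of the ConfigMap patch, using the standard ingress-nginx configuration keys and a placeholder for the egress IP list from step 2.a, follows:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
  real-ip-header: "proxy_protocol"
  # Replace with the comma-separated list of egress IPs from step 2.a
  proxy-real-ip-cidr: "x.x.x.x/32,y.y.y.y/32"
```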
b. [PRIVATE NETWORK ONLY]
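The original snippet is not reproduced here; a sketch of the ConfigMap patch, using the standard ingress-nginx configuration keys and the example subnet range from above, follows:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
  real-ip-header: "proxy_protocol"
  # Replace with your own subnet range
  proxy-real-ip-cidr: "10.0.0.0/20"
```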
Note: 10.0.0.0/20 must be replaced with your own subnet range.
And apply it in your cluster:
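Assuming the file name from the previous step, the ConfigMap can be patched and the controller pod restarted with:

```shell
# Apply the patch file to the controller ConfigMap
kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
  --patch "$(cat patch-ingress-controller-configmap.yml)"

# Delete the controller pod so it is recreated with the new configuration
kubectl -n ingress-nginx delete pod -l app.kubernetes.io/component=controller
```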
You should see the configuration being patched and the controller pod deleted (and recreated):
Patching with Helm
Copy the following YAML snippet into a values.yaml file and modify the proxy-real-ip-cidr parameter according to your cluster configuration:
a. [PUBLIC NETWORK ONLY]
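The original snippet is not reproduced here; a sketch of the Helm values follows, combining the same Service annotation and ConfigMap keys as the manifest-based method. The annotation name is an assumption based on the OVHcloud load balancer annotations, so verify it against the current OVHcloud documentation:

```yaml
controller:
  service:
    annotations:
      # Assumed annotation; verify against the OVHcloud documentation
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    # Replace with the comma-separated list of egress IPs from step 2.a
    proxy-real-ip-cidr: "x.x.x.x/32,y.y.y.y/32"
```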
b. [PRIVATE NETWORK ONLY]
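The original snippet is not reproduced here; a sketch of the Helm values follows, using the example subnet range from above. The annotation name is an assumption based on the OVHcloud load balancer annotations, so verify it against the current OVHcloud documentation:

```yaml
controller:
  service:
    annotations:
      # Assumed annotation; verify against the OVHcloud documentation
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    # Replace with your own subnet range
    proxy-real-ip-cidr: "10.0.0.0/20"
```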
Note: 10.0.0.0/20 must be replaced with your own subnet range.
And upgrade your Helm release:
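Assuming the release name and namespace used during installation, the upgrade command is:

```shell
# Re-deploy the chart with the new values
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx -f values.yaml
```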
You should see your Helm release being upgraded:
3. Testing
Due to DNS propagation, resolving your Load Balancer's FQDN can take an additional 2 to 5 minutes. In the meantime, you can use the provided IP address to access the Load Balancer.
The domain name generated for the service and displayed in the EXTERNAL-IP field is for cluster-internal usage only. It should not be used to access the service from the internet.
We can now deploy a simple echo service to verify that everything is working. The service will use the mendhak/http-https-echo image, a very useful HTTPS echo Docker container for web debugging.
First, copy the following manifest to an echo.yaml file:
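The original manifest is not reproduced here; a minimal sketch follows, with a Deployment, a Service and an Ingress. The resource names and the catch-all routing rule are illustrative; the container listens on port 8080 for HTTP, as documented by the mendhak/http-https-echo image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: mendhak/http-https-echo
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo
                port:
                  number: 80
```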
And deploy it on your cluster:
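Assuming the file name from the previous step:

```shell
# Create the echo Deployment, Service and Ingress in the cluster
kubectl apply -f echo.yaml
```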
Now you can test it using the LoadBalancer URL:
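For example, with curl (replace the placeholder with the EXTERNAL-IP obtained in step 1):

```shell
# Send a request through the Load Balancer to the echo service
curl http://<EXTERNAL-IP>/
```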
And you should get the HTTP parameters of your request, including the right source IP in the x-real-ip header:
What if I want to use another Ingress Controller
The preceding method should work similarly for any Ingress Controller. We will soon update this tutorial with more detailed information on other Ingress Controllers, specifically Traefik.
Go further
- If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and have our Professional Services experts assist you with your specific use case.
Join our community of users.