Load Balancer Service type for Kubernetes

External IP

Load balancing means distributing a set of tasks over a set of resources: traffic is spread across them so that the overall process runs more efficiently.

Thanks to Ahmet Alp Balkan for the diagrams

Load Balancing is often perceived as a complex technology, yet changing application architectures and the growth of virtualization and cloud are driving requirements for power and flexibility without sacrificing ease-of-use.

LB in K8s

Once you’ve got your application running in Kubernetes, its scheduler makes sure that the desired number of pods is always running. This means that application pods can be created and deleted unexpectedly, and you should not depend on any particular pod. However, you should still be able to access your application in a predictable manner. For that, Kubernetes provides the simplest form of load balancing traffic: a Service.

A Service in Kubernetes defines an abstraction over a set of pods and a policy for accessing them; this pattern is sometimes called a microservice.

Let us take an example: you are running a backend application with 4 pods. The 4 pods are interchangeable, and the frontend does not care which backend pod it is using. So whenever the set of pods at the backend changes, the frontend clients should not know about it or have to keep track of it.
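As a sketch, a minimal Service manifest for a backend like the one described above might look like this (the name, labels, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend            # hypothetical Service name
spec:
  selector:
    app: backend           # matches the labels on the 4 backend pods
  ports:
    - protocol: TCP
      port: 80             # port the frontend connects to
      targetPort: 8080     # port the backend pods listen on
```

The frontend only ever talks to the stable Service address; Kubernetes balances the traffic across whichever pods currently match the selector.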

External IPs

The most important thing here is to be clear about which IP is used to reach the Kubernetes cluster. To connect to the cluster, we can bind a Service to a specific IP using the external IP service type.
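As a hedged sketch, binding a Service to a node IP via the `externalIPs` field might look like this (the service name, selector, and IP here just mirror the example setup in this post):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  selector:
    app: httpd
  ports:
    - port: 80
      targetPort: 80
  externalIPs:
    - 10.240.0.2     # node IP on which the service should be reachable
```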

Architecture

You can see that in the above architecture both cluster nodes have their own IP. The IP address 10.240.0.2 on Node 1 is bound to the httpd service, while the actual pod resides on Node 2; the IP address 10.240.0.3 is bound to the Nginx service, while the actual pod resides on Node 1. The underlying overlay network makes this possible. When we curl 10.240.0.2, we should see the response from the httpd service, and when we curl 10.240.0.3, we should see the response from the Nginx service.

The advantage of using an external IP is:

The disadvantage of an external IP is:

Again, we will use the same diagram as a reference for our cluster setup, except with different IPs and hostnames. This is not a realistic example, but it makes it easy to distinguish which is which when we’re verifying the setup. In a real-life use case you might, for example, expose a database on one external IP and a second application on another external IP.

I have provisioned 2 VMs for this scenario: k3s-external-ip-master will be our Kubernetes master node, with IP 10.240.0.2, and k3s-external-ip-worker will be the Kubernetes worker, with IP 10.240.0.3.

Exposing an External IP Address to Access an Application in a Cluster

Here we are going to install the Kubernetes cluster on the master node, and the worker node will join the cluster.

You should be seeing something like this now

We will create an Nginx deployment and an httpd deployment.
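The two deployments can be created imperatively with kubectl; an equivalent declarative manifest for the Nginx one might look roughly like this (the httpd deployment is analogous, swapping the name and image):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx    # use image "httpd" for the second deployment
```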

You should be seeing this now

Let’s expose the Nginx deployment

And expose the httpd deployment

Run both commands with kubectl

Now your Kubernetes services should look like this

You might notice that the service type shown here is ClusterIP rather than something like "ExternalIP". That is because an external IP is not a separate service type: externalIPs is just a field that can be set on a Service of any type, so kubectl still reports the type as ClusterIP.

So we can check the output by using curl, and we should get the Apache default page.

Next, let us curl the Nginx service; you should see the Nginx default page in response.

service/load-balancer-example.yaml
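For reference, the manifest from the Kubernetes documentation looks roughly like this (a Deployment of five Hello World pods listening on port 8080):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: load-balancer-example
  name: hello-world
spec:
  replicas: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: load-balancer-example
  template:
    metadata:
      labels:
        app.kubernetes.io/name: load-balancer-example
    spec:
      containers:
        - image: gcr.io/google-samples/node-hello:1.0
          name: hello-world
          ports:
            - containerPort: 8080
```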

1. Run the command to create the Deployment:

2. Display information about your ReplicaSet objects:

3. Create a Service object that exposes the deployment:

kubectl expose deployment hello-world --type=LoadBalancer --name=ex-service

4. Display information about the Service:

The output is similar to this:

5. Display detailed information about the Service:

The output is similar to this:

Make a note of the external IP address (LoadBalancer Ingress) exposed by your service. In this example, the external IP address is 10.88.55.7. Also note the values of Port and NodePort: in this example, the port is 8080 and the NodePort is 32377.

In the preceding output, you can see that the service has several endpoints: 10.0.0.4:8080, 10.0.0.5:8080, 10.0.0.6:8080 + 2 more. These are the IP addresses used internally by the pods running Hello World. To verify that these are pod addresses, enter this command:

The output is similar to this:

Use the external IP address (LoadBalancer Ingress) to access the Hello World application:

The response to a successful request is a hello message:

After you create the service, it takes some time for the cloud infrastructure to create the load balancer and populate the external IP address in the service.

Sometimes, however, the Kubernetes cluster is running but the external IP remains in the pending state.

So here is the twist with using a LoadBalancer: if Kubernetes is running in an environment that does not support LoadBalancer-type services, the load balancer will not be provisioned, and the service will continue to behave like a NodePort service.

In that case, if you manage to attach an EIP or VIP to your node, you can assign that address to the EXTERNAL-IP of your type=LoadBalancer service in the Kubernetes cluster, for example by attaching the EIP/VIP address 172.10.2.10 to the node.
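One way to wire this up, sketched here as a hedged example, is to add the node's address to the Service spec via a merge patch; kubectl then lists it under EXTERNAL-IP (the service name is a placeholder):

```yaml
# Fragment applied with, for example:
#   kubectl patch svc <service-name> -p '{"spec":{"externalIPs":["172.10.2.10"]}}'
spec:
  externalIPs:
    - 172.10.2.10    # EIP/VIP attached to the node
```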

Well Done

If this post was helpful, please click the clap 👏 button below a few times to show your support! ⬇
