Container orchestration is a commonly heard term in today’s IT landscape. Organizations are moving from traditional virtual machines to containers for deployment. Docker has changed the way data centers operate and has set the standard for microservices across the industry. While Docker Swarm is one orchestration solution, a competing offering is Kubernetes, originally developed by Google. Kubernetes is an open-source orchestration platform that handles deployment, scaling/descaling, and load balancing of containers.
To achieve orchestration, it’s important to choose the networking model for the cluster wisely. In this article, we will cover the Kubernetes networking model in detail. Before jumping into the networking types and their implementations, let’s take a look at some basic concepts.
Just as cells are the building blocks of living organisms, pods are the building blocks of a Kubernetes application. A pod contains one or more containers, which run on the same host, share the same network configuration, and share common resources.
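As an illustration, a minimal pod manifest with two containers might look like the following; the pod name, container names, and images are hypothetical placeholders, not from the article:

```yaml
# A minimal, hypothetical two-container pod: both containers share
# the pod's IP address and can reach each other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: app
      image: nginx
    - name: sidecar
      image: busybox
      command: ["sleep", "3600"]
```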
Nodes are the worker machines in the Kubernetes cluster, such as virtual or physical machines. Every node runs the built-in services needed to host pods.
A Service is an abstraction that defines a logical set of pods where the application is running.
Kubernetes Networking Conditions
The main purpose of the Kubernetes networking model is to create a flat network structure and simplify the cluster networking process for end users. To achieve this, Kubernetes requires network administrators to satisfy the following rules as fundamental requirements:
- All the Pods should be able to communicate with one another without the need of Network Address Translation (NAT)
- All the nodes should be able to communicate with all the pods without the need of NAT
- The IP address a pod sees for itself is the same IP address that other pods see for it
These rules help to reduce the challenges in porting applications from traditional VMs to containers and also reduce network complexity.
Kubernetes Networking Problems and their Solutions
In order to take full advantage of Kubernetes, let’s take a look at the four distinct networking problems and their solutions:
- Container-to-Container Networking
- Pod-to-Pod Networking
- Pod-to-Service Networking
- Internet-to-Service Networking
Container-to-Container Networking
The highly coupled container-to-container networking happens inside a pod. As explained previously, a pod is a group of containers that share the same IP address and port space, assigned by the pod’s network namespace. Communication between two containers happens via a localhost connection, since they reside inside the same namespace.
For example, suppose the pod’s network IP address is 188.8.131.52 and it runs two containers, A and B. Both containers are reachable through that IP address, each on its own port (say, 50 and 75) assigned within the shared network namespace.
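This localhost communication can be simulated with two threads standing in for the two containers; the sketch below is a minimal Python illustration (the "ack" reply and the OS-assigned port are illustrative, not part of Kubernetes):

```python
import socket
import threading

# Two "containers" in one pod share a single network namespace, so they
# can talk over localhost. "Container B" below is a tiny server;
# "container A" is the client connecting to it.

# Container B: bind a listening socket on localhost.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))       # OS picks a free port
server_sock.listen(1)
port = server_sock.getsockname()[1]

def container_b():
    conn, _ = server_sock.accept()
    data = conn.recv(1024)
    conn.sendall(b"ack: " + data)        # reply to container A
    conn.close()

t = threading.Thread(target=container_b)
t.start()

# Container A: connect over localhost -- no NAT or routing involved.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server_sock.close()
print(reply.decode())  # ack: hello
```

Because both endpoints live in the same network namespace, the traffic never leaves the loopback interface.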
Pod-to-Pod Networking
Next, let’s take a look at how two pods can communicate with one another across two different nodes. Every node has a designated, unique range of IP addresses (a Classless Inter-Domain Routing (CIDR) block) for its pod IPs. Every pod has a dedicated IP address that is seen by the other pods, and this IP address does not conflict with pods on other nodes.
To understand pod-to-pod networking, let’s take a scenario in which two pods reside on the same node. Each pod lives inside its own network namespace, which then needs to communicate with the other namespaces on the same node.
Linux offers a mechanism to connect network namespaces using the Virtual Ethernet Device, or veth pair. Each veth pair consists of two virtual network interfaces (say, veth0 and veth1), and each end can be placed in a different namespace. To connect a pod, one end of the veth pair is assigned to the root network namespace while the other is assigned to the pod’s network namespace. The veth pair acts as the intermediary that establishes the connection between the root network namespace and the pod network namespace and carries traffic between them.
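The veth mechanism can be sketched with the standard Linux `ip` commands below (run as root). The namespace name, interface names, and addresses are illustrative; they are not the exact ones a Kubernetes runtime creates:

```shell
ip netns add pod1                             # the pod's network namespace
ip link add veth0 type veth peer name veth1   # create the veth pair
ip link set veth1 netns pod1                  # move one end into the pod namespace
ip addr add 10.0.0.1/24 dev veth0             # root-namespace end
ip netns exec pod1 ip addr add 10.0.0.2/24 dev veth1
ip link set veth0 up
ip netns exec pod1 ip link set veth1 up
```

After this, traffic sent to 10.0.0.2 from the root namespace flows through the veth pair into the pod’s namespace.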
Pod-to-Service Networking
The third networking challenge is pod-to-service networking. In general, pod IP addresses are ephemeral: if an application crashes or a machine reboots, the IP address disappears and a new one is assigned, so the IP address can change without notice. To overcome this problem, Kubernetes uses the concept of Services. A Service keeps track of the pod IP addresses that change over time and exposes a single static virtual IP address. Even as the pod IPs behind the Service change, clients have no problem connecting, because they connect directly to the Service’s static virtual IP address.
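As a sketch, a Service that provides a stable virtual IP in front of a changing set of pods might look like this; the names (my-service, app: my-app) and ports are hypothetical placeholders:

```yaml
# A minimal, hypothetical Service: the selector matches pods labeled
# app: my-app and exposes them behind one stable virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80          # the Service's stable port
      targetPort: 8080  # the port the pods actually listen on
```

Pods matching the selector can come and go; the Service’s virtual IP and port stay the same.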
Internet-to-Service Networking
The previous three challenges and their solutions dealt with routing traffic within the Kubernetes cluster. What if you want to expose an application outside the cluster to the external internet? There are two traffic directions to handle: ingress and egress.
Ingress is one of the most robust ways to expose a Service to the external world (outside the Kubernetes cluster) and allow access to it. In simple terms, ingress is a set of rules that define which connections should reach the Service and which should be blocked. Like a load balancer or NodePort, ingress filters requests coming from outside the Kubernetes cluster to the Service. With ingress, users can also consolidate all the different rules in a single place.
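Such a rule set might be sketched as the Ingress manifest below; the resource name, host, and backend Service are hypothetical placeholders:

```yaml
# A minimal, hypothetical Ingress: route HTTP requests for example.com
# to the backend Service my-service on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```

Additional hosts and paths can be consolidated as further entries under the same `rules` list.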
Egress is the process of routing the traffic from the node to outside the Kubernetes cluster. In order for the traffic in the cluster to be made available outside, you can attach an Internet Gateway to the Virtual Private Cloud (VPC).
However, the IP address of a pod differs from the IP address of the node that hosts it, and the address translation at the Internet Gateway only works for VM IP addresses: the gateway’s NAT has no knowledge of which pods run in which VMs. Kubernetes solves this problem using iptables.
For example, suppose a packet is transmitted from a pod to the external internet. If the source IP address is the pod’s, the Internet Gateway will reject it, as it only understands the IP addresses attached to the VMs. In this case, iptables performs source NAT and rewrites the packet’s source address, so that to the Internet Gateway the packet appears to come from the VM rather than the pod. The Internet Gateway then performs another round of NAT (from an internal to an external IP), and the packet finally reaches the internet.
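A sketch of such a source-NAT rule using iptables (run as root) might look like the following; the pod CIDR 10.0.0.0/16 is a hypothetical example, not a value from the article:

```shell
# Rewrite the source address of packets leaving the pod CIDR for
# destinations outside it, so they appear to come from the node's
# own IP (MASQUERADE picks the outgoing interface's address).
iptables -t nat -A POSTROUTING -s 10.0.0.0/16 ! -d 10.0.0.0/16 \
         -j MASQUERADE
```

Kubernetes network plugins and kube-proxy install rules of this general shape automatically; you rarely write them by hand.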
Managing multiple containerized applications becomes easier with Kubernetes thanks to the concepts of pods and Services. The power of Kubernetes networking helps developers take application programming and development to the next level.
June 24, 2019