Kubernetes for the absolute beginner - Part III
In this article, we continue with some more concepts, this time, related to networking.
What are Kubernetes Ingress and Egress?
Recall that, for all intents and purposes, an external user or application interacts with our Kubernetes pod as if it were a real server. That means we need to define security rules for what traffic is allowed into and out of our "server", just as we would for any other server hosting an application. Incoming traffic to our Kubernetes pods is called ingress, and outbound traffic from our pods to the outside world is called egress. In Kubernetes, you restrict unwanted traffic into and out of your services by defining ingress and egress rules in a NetworkPolicy resource. These rules are also where you define the ports on which your pods accept incoming and send outgoing traffic. Read more about how to define ingress policies here.
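As a minimal sketch, a NetworkPolicy like the following allows traffic into pods labeled app: webserver only on port 80, and traffic out only on port 443. The names, labels, and CIDR range are hypothetical placeholders:

```yaml
# Hypothetical example: restrict traffic for pods labeled app: webserver.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webserver-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: webserver          # the policy applies to pods with this label
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/16 # assumed internal network range
      ports:
        - protocol: TCP
          port: 80            # accept incoming HTTP only
  egress:
    - ports:
        - protocol: TCP
          port: 443           # allow outgoing HTTPS only
```

Note that NetworkPolicy rules are only enforced if your cluster's network plugin supports them (Calico is one common example); on a plugin without NetworkPolicy support, the resource is accepted but has no effect.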
What is an Ingress Controller?
Rules for routing external HTTP(S) traffic into your cluster are defined in an Ingress resource, but before those rules can take effect, you must first deploy a component known as an ingress controller; one is not started by default in your cluster. There are different types of ingress controllers, and the Kubernetes project supports only the Google Cloud and the NGINX ingress controllers out of the box. If you require an additional or different controller, such as Amazon's AWS controller, you set it up according to the instructions for whichever environment you are in. You can also run multiple ingress controllers in one cluster according to its needs.
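For illustration, here is what an Ingress resource handled by the NGINX ingress controller might look like. The hostname and Service name are placeholders, not values from this article:

```yaml
# Illustrative Ingress resource; hostname and backend Service are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webserver-ingress
spec:
  ingressClassName: nginx        # tells the NGINX ingress controller to handle this resource
  rules:
    - host: www.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webserver-svc   # placeholder Service name
                port:
                  number: 80
```

The ingressClassName field is how the cluster decides which of several running ingress controllers should pick up a given Ingress resource.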
What are Replicas and ReplicaSets?
For the sake of resiliency, it is always a good idea to run multiple copies of a pod, spread across different nodes. These copies are called replicas. Let's say one of your desired-state policies is 'always maintain 3 copies of the pod named webserver-1'. This means your ReplicaSet (or its older equivalent, the Replication Controller) will monitor the number of active replicas of that pod, and if one becomes unavailable for any reason (such as the node hosting it going down), a replacement will be created automatically.
The desired state is typically defined in a Deployment. The Deployment controller, one of the controllers running on the control plane (master) node, is responsible for moving the current state toward the desired state; it does so by creating and managing a ReplicaSet for you. So, for instance, if you currently have 2 replicas of a pod and your desired state says you should have 3, the ReplicaSet will automatically detect the difference and start pod #3 according to the predefined settings.
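The 'always maintain 3 copies of webserver-1' policy above can be expressed as a Deployment. This is a sketch with assumed names and an assumed image tag:

```yaml
# Hypothetical Deployment expressing the desired state "3 replicas of webserver-1".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-1
spec:
  replicas: 3                  # desired state: always keep 3 copies running
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver         # pods created from this template carry this label
    spec:
      containers:
        - name: nginx
          image: nginx:1.25    # example image tag, not from the article
```

Applying this manifest makes the Deployment controller create a ReplicaSet, which in turn keeps 3 matching pods alive; delete one pod and a replacement appears within seconds.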
What is Service Mesh?
In the first post in this series, we defined what microservices are. Closely related to this is the concept of the microservices mesh, also called a service mesh. A mesh manages network traffic between microservices: it sits in the networking layer between your containers and lets you control or change how the different components of your application talk to each other.
Let's illustrate this service mesh concept with an example:
Consider that you want to test a new Nginx release to check if it's compatible with your web application. You have created a new container (Container2) with the new Nginx version and copied over the webserver config from the current container (Container1). But you don't want to affect the other microservices that make up the web application (assuming each container corresponds to a separate microservice): the MySQL database, the Node.js frontend, the load balancers, and so on. With a service mesh in place, you can instantly point only the webserver microservice at Container2 (the one with the new Nginx version) for testing. If it turns out not to work, say because it causes compatibility issues with your website, the mesh can just as quickly switch traffic back to the original Container1. All of this happens without any configuration changes to the other containers; the switch is completely transparent to them. Without a service mesh, this would be a tedious task: changing configuration on every other container, one by one, to point its services from Container1 to Container2, and then, after the failed test, changing them all back. Read more about service mesh in this article.
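As a sketch of how such a switch might look in practice, here is the idea expressed with Istio, one popular service mesh (used purely as an example; the article does not prescribe a particular mesh). Assuming Container1 and Container2 run as Deployments labeled version: v1 and version: v2 behind a Service named webserver-svc, a DestinationRule names the two subsets and a VirtualService routes all traffic to one of them:

```yaml
# Illustrative Istio config; Service name, subset names, and labels are assumptions.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: webserver
spec:
  host: webserver-svc          # placeholder Service name
  subsets:
    - name: v1
      labels:
        version: v1            # current Nginx (Container1)
    - name: v2
      labels:
        version: v2            # new Nginx release (Container2)
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: webserver
spec:
  hosts:
    - webserver-svc
  http:
    - route:
        - destination:
            host: webserver-svc
            subset: v2         # send all webserver traffic to the new version
```

Rolling back after a failed test is a one-line change: set subset back to v1 and re-apply. No other container's configuration is touched.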
In this part of our Kubernetes series, we introduced some concepts related to Kubernetes networking. Networking in Kubernetes can be tricky and hard to understand when you are just starting out; you may need some practice before all of this really clicks and you can move on to more advanced use cases.
In the next article, we shall take a look at some more topics surrounding Kubernetes: how to get started learning it, how to install and test it locally, and some great monitoring and security tools for Kubernetes.