Building a Custom Scheduler in Kubernetes


Kubernetes is built from several components that together orchestrate distributed systems, and the scheduler is one of them. It is a pillar of orchestration, ensuring that every pod is assigned to a node to run on. Kubernetes is by far the most popular container management system not by chance, but because it is highly configurable and extensible. In that spirit, you can build your own custom scheduler.

    Generally, the scheduler is responsible for assigning each newly created pod to a suitable node in the cluster. The default implementation, kube-scheduler, is a core component of the control plane and defines how scheduling works out of the box. For applications with special needs, such as heavy-traffic workloads, a custom Kubernetes scheduler can make placement decisions that keep resource management responsive and Kubernetes operations optimized. It also helps to understand that the scheduler's behavior is shaped by policies and profiles, which configure its functionality.

    By definition, a custom Kubernetes scheduler watches for new pods in the system and ensures each is assigned to the best node it can run on. Which node is chosen is influenced by the policies and profiles configured for the scheduler. Scheduling proceeds in two major operations: Filtering and Scoring. Filtering inspects the available nodes and keeps only those able to meet the requirements of the newly created pod; Scoring then assigns a score to each node that survived filtering, and the pod is placed on the node with the highest score. If two or more nodes tie for the highest score, the winner is selected at random.
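    The two phases can be sketched with toy data structures. This is a minimal illustration only: the field names (`free_cpu`, `free_mem`) and the capacity-based score are invented for the example, while real predicates also check taints, affinity, ports, volumes, and more.

```python
import random

def filter_nodes(pod, nodes):
    """Filtering: keep only nodes with enough free CPU and memory for the pod."""
    return [
        n for n in nodes
        if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]
    ]

def score_node(pod, node):
    """Scoring: here, a higher score for nodes with more capacity left over."""
    return (node["free_cpu"] - pod["cpu"]) + (node["free_mem"] - pod["mem"])

def pick_node(pod, nodes):
    """Filter, score, then break ties at random, as kube-scheduler does."""
    feasible = filter_nodes(pod, nodes)
    if not feasible:
        return None  # no feasible node: the pod stays Pending
    best = max(score_node(pod, n) for n in feasible)
    winners = [n for n in feasible if score_node(pod, n) == best]
    return random.choice(winners)
```

    For example, a pod requesting 2 CPUs and 4 GB of memory would never be placed on a node with only 1 free CPU, however high that node might otherwise score.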

    Furthermore, the filtering and scoring behavior of the scheduler is configured through policies and profiles: policies set the priorities used for scoring, while profiles configure which plugins run at each scheduling stage, such as Filter, Score, and Bind. The scheduler's performance can also be tuned to suit business needs, especially for relatively large clusters, striking a balance between latency and accuracy: placing pods quickly while ensuring the scheduler does not make poor decisions.
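    As one concrete knob for this latency/accuracy trade-off, kube-scheduler's configuration exposes `percentageOfNodesToScore`, which limits how many feasible nodes are scored in large clusters. A sketch (check the `apiVersion` against your cluster's Kubernetes version):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
# Score only 50% of the nodes that pass filtering: faster placement,
# at the cost of possibly missing the single best node.
percentageOfNodesToScore: 50
```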

    There is a concept of extensibility in Kubernetes, realized through extensions: components that extend and integrate with Kubernetes for long-term operation. The scheduling framework has multiple extension points; aside from the filtering and scoring functions, other points correspond to operations such as:

    • Queue sorting
    • Reserving
    • Permitting (approve, deny, wait)
    • Pre-binding / Binding / Post-binding
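    To make the pipeline concrete, here is a hypothetical sketch of how a framework could invoke plugin callbacks at each extension point in order. The class and hook signatures are invented for illustration; the real scheduling framework is written in Go with richer interfaces.

```python
# Toy scheduling-framework pipeline: each extension point holds a list of
# plugin callbacks invoked in order. Names and signatures are illustrative.
class Framework:
    def __init__(self):
        self.filters = []   # Filter: drop infeasible nodes
        self.scorers = []   # Score: rank the surviving nodes
        self.permits = []   # Permit: approve or deny the placement
        self.binders = []   # Bind: record the pod-to-node assignment

    def schedule(self, pod, nodes):
        # Filter extension point: a node must pass every filter plugin.
        feasible = [n for n in nodes if all(f(pod, n) for f in self.filters)]
        if not feasible:
            return None
        # Score extension point: sum the scores from all score plugins.
        best = max(feasible, key=lambda n: sum(s(pod, n) for s in self.scorers))
        # Permit extension point: any plugin can veto the placement.
        if not all(p(pod, best) for p in self.permits):
            return None
        # Bind extension point: commit the decision.
        for bind in self.binders:
            bind(pod, best)
        return best
```

    The design point this illustrates is that plugins never talk to each other directly; they only implement hooks, and the framework controls the order in which the hooks run.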

    The following image, taken from the official documentation, shows all the extension points:

    [Image: Scheduling framework extension points]

    In conclusion, a custom Kubernetes scheduler is used to orchestrate and manage applications with specific needs, such as high usage. It helps avoid downtime by ensuring pods are correctly assigned and functional without any input from developers beyond configuring the scheduler to operate on its own.

    Furthermore, if you are using Kubernetes v1.18 or later, you can configure a set of plugins as a scheduler profile and define multiple profiles to fit various kinds of workloads, which is especially useful in an organization running many different services.
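    A sketch of what a multi-profile configuration might look like (the second scheduler name is an example; `NodeResourcesBalancedAllocation` and `PodTopologySpread` are built-in plugin names, and the `apiVersion` should match your cluster's version):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  # Default behavior, used by pods that specify nothing.
  - schedulerName: default-scheduler
  # A second profile that favors spreading pods across topology domains.
  - schedulerName: spread-scheduler
    plugins:
      score:
        disabled:
          - name: NodeResourcesBalancedAllocation
        enabled:
          - name: PodTopologySpread
            weight: 2
```

    A pod opts into a particular profile by setting `spec.schedulerName` to that profile's name; otherwise it is handled by the default profile.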

    Additional resources:

    A Custom Kubernetes Scheduler to Orchestrate Highly Available Applications

    Writing custom Kubernetes schedulers · Banzai Cloud

    The Chief I/O

    The team behind this website. We help IT leaders, decision-makers and IT professionals understand topics like Distributed Computing, AIOps & Cloud Native
