Kubernetes: How to avoid scheduling pods on certain nodes

Balkrishan Nagpal
5 min read · Jan 11, 2021


When new pods are created in a cluster (either to replace failed pods or to scale the system horizontally), they are placed on some node. If an existing node has capacity matching the pod's resource requirements, the pod is scheduled on that node; otherwise the pod stays pending (or, if cluster autoscaling is enabled, a new node is provisioned).

Generally, an application has multiple pods, and every pod has different resource requirements. You decide the node size based on those requirements. But if one of your pods has very different resource requirements compared to the others (e.g. a database), you might want a different node configuration for that particular pod. In such scenarios you want to make sure that only certain pods (e.g. the database) land on that node and no others, so that the node keeps sufficient capacity for the target pod. We will see how to keep other pods from landing on these nodes.

We can do this by using taints and tolerations.

Taints and Tolerations

Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes.

Taint

Taints are applied to nodes. They allow a node to repel a set of pods.

Tolerations

Tolerations are applied to pods, and allow (but do not require) the pods to be scheduled onto nodes with matching taints.

Core concept

The idea is: you taint a node on which you want only certain pods to be scheduled, and the pods that should be scheduled on that node carry a toleration for the taint applied to it.

Gate security analogy

It is the same as restricting entry to a premises to certain people only: you put security on the gate, and security does not allow anyone to enter. Only those who hold the required pass are let in. So taints are like security on nodes, and tolerations are the gate passes for that security.

Put security on gate: Apply taint on node

To restrict a node to accepting pods of certain types, we need to apply a taint on the node. You can apply a taint using kubectl taint.

kubectl taint nodes <node-name> type=db:NoSchedule

You need to replace the <node-name> placeholder with the name of the node. The above command places a taint on node <node-name>. The taint has key “type”, value “db”, and taint effect “NoSchedule”. This means that no pod will be scheduled on node <node-name> unless it has a matching toleration. We will shortly see what a taint effect is and what the different effects are. The key and value can be anything; in this case I chose “type” as the key and “db” as the value.

List taints on a node

You can list the taints applied on a node using kubectl describe and then filtering the output. The command below lists all taints applied on the specified node; replace the <node-name> placeholder with the actual name of a node in your cluster.

kubectl describe node <node-name> | grep Taints
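
If the taint from the earlier step has been applied, the output will look something like the following (exact spacing varies between kubectl versions):

```
Taints:             type=db:NoSchedule
```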

Give pass to some people: Apply tolerations to pods

In the following pod definition, notice tolerations under spec. This toleration is for a taint and hence acts as a gate pass for security (we are referring to the taint as security). Since tolerations is a list, you can apply multiple tolerations to a pod.

apiVersion: v1
kind: Pod
metadata:
  name: <pod-name>
  labels:
    app: taint-test
spec:
  containers:
  - name: <container-name>
    image: <image>
  tolerations:
  - key: "type"
    operator: "Equal"
    value: "db"
    effect: "NoSchedule"

An important thing to note while applying tolerations is that the toleration must exactly match the taint you are trying to address. Notice that in the above toleration, the key, value, and effect are the same as in the taint. The Equal operator tells the scheduler to match the value for the key.

Once the above toleration is applied, this pod can be scheduled on a node with the matching taint. So effectively, this pod has the gate pass to get onto the node. Any pod which does not have this toleration can’t be scheduled on that node.

It is important to understand that applying a toleration to a pod means the pod can be scheduled on a node with the matching taint; it does not mean the pod can’t go to any other node in the cluster. The pod can still land on any untainted node. (If you also need to guarantee that the pod lands only on the tainted node, you would combine taints with node affinity or a nodeSelector.)

Before we move further, let’s discuss the various taint effects and toleration operators.

Taint effects

There are 3 taint effects: NoSchedule, PreferNoSchedule and NoExecute.

  • NoSchedule: If there is at least one un-ignored taint with effect NoSchedule, Kubernetes will not schedule the pod onto that node.
  • PreferNoSchedule: If there is at least one un-ignored taint with effect PreferNoSchedule, Kubernetes will try not to schedule the pod onto the node, but if it does not find any other suitable node, it will still schedule the pod there.
  • NoExecute: If there is at least one un-ignored taint with effect NoExecute, the pod will not be scheduled on that node (as with NoSchedule), and in addition, any pods already running on the node without a suitable toleration will be evicted. This can happen when pods were scheduled on the node before the taint was applied.
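
As a sketch of how NoExecute interacts with tolerations, a toleration may also specify tolerationSeconds, which bounds how long a pod that is already running on the node keeps running after a matching NoExecute taint is applied (this fragment reuses the type=db key/value from our running example):

```yaml
# Pod spec fragment: tolerate a matching NoExecute taint, but only
# for 300 seconds; after that the pod is evicted from the node.
tolerations:
- key: "type"
  operator: "Equal"
  value: "db"
  effect: "NoExecute"
  tolerationSeconds: 300
```

Without tolerationSeconds, a pod tolerating a NoExecute taint keeps running on the node indefinitely.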

Toleration operators

There are 2 operators for tolerations in pods:

  • Equal: Matches both the key and the value, making sure both are the same as specified in the taint.
  • Exists: Only checks that a taint with the given key exists on the node; the taint’s value is ignored and can be anything.
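
For example, a toleration using the Exists operator omits the value field entirely; it matches any taint with the given key, regardless of the taint’s value (a sketch reusing the key from our running example):

```yaml
# Matches type=db:NoSchedule, type=cache:NoSchedule, and so on —
# any NoSchedule taint whose key is "type".
tolerations:
- key: "type"
  operator: "Exists"
  effect: "NoSchedule"
```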

Taints and master node/control plane

If you notice, in a multi-node cluster, pods are not scheduled on the master node. How is this controlled? Well, you guessed it right: using a taint on the master node.

The master node has the following taint applied to it (in newer Kubernetes versions the key is node-role.kubernetes.io/control-plane instead):

node-role.kubernetes.io/master:NoSchedule

You can check this by describing the node and filtering taint as mentioned above.

Since no pod has a toleration for this taint, no pod is scheduled on the master node. You can schedule pods on the master node by removing the taint from it, as described in the following section.
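
Alternatively, instead of removing the taint, you can give a specific pod a toleration for it. A minimal sketch (since the master taint has no value, Exists is the natural operator):

```yaml
# Pod spec fragment: allow this pod onto the master node
# while the taint stays in place for all other pods.
tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"
```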

Remove taint from node

To remove the taint we added earlier, you can run:

kubectl taint nodes <node-name> type=db:NoSchedule-

It is exactly the same command used to apply the taint, followed by “-” at the end.

That’s all about how you can avoid pods being scheduled on certain nodes.

Conclusion

Due to the special resource requirements of some pods, you may launch nodes with a higher configuration and want to make sure those nodes don’t accept every pod coming their way; instead, you want to restrict scheduling on those nodes to certain pods. You do this by applying a taint on the node. A taint restricts any pod from being scheduled on that node unless the pod has a toleration for the taint; pods with the appropriate toleration can be scheduled on it.

So, it is a 2-step process:

  1. Apply taint on node
  2. Mention toleration on pod for the taint

I hope this helps.
