Node Affinity

In this tutorial we will discuss Node Affinity in Kubernetes. In simple words, node affinity allows you to tell Kubernetes to schedule pods only onto specific subsets of nodes. I have tried to keep it as simple as possible to understand.

It is recommended that you understand the Node Selectors and Taints and Tolerations concepts before proceeding further.

The primary purpose of this feature is to ensure that PODs are hosted on particular nodes. There are 3 ways to place a POD on a specific node:

  1. Node Selectors
  2. Taints and Tolerations
  3. Node Affinity

Each method has its own advantages and disadvantages, so based on the use case we can use any one of the above methods to place a POD on a node.

Node Affinity is an advanced feature in Kubernetes compared to the other two methods above.

With Node Selectors we cannot use advanced expressions like OR or NOT. Node affinity provides us advanced capabilities to limit POD placement to specific nodes.

Both Node Selectors and Node Affinity do exactly the same thing in this simple case: they place the POD on the large node.
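For comparison, here is a minimal sketch of the equivalent pod definition using a plain node selector, assuming the node is labelled size=Large as in the Node Selectors tutorial:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod

spec:
  containers:
    - name: data-processor
      image: data-processor
  nodeSelector:        # simple key-value match, no expressions possible
    size: Large

The node affinity version of the same pod follows.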

Let us look at it a bit closer. Under spec we have affinity, and then nodeAffinity under that. Then we have a property that reads like a sentence, requiredDuringSchedulingIgnoredDuringExecution, followed by nodeSelectorTerms, which is an array where we specify the key and value pairs.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod

spec:
  containers:
    - name: data-processor
      image: data-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
            - key: size
              operator: In
              values:
                - Large

The key-value pairs are in the form of key, operator and values, where the operator here is In. The In operator ensures that the POD will be placed on a node whose label size has any value in the list of values specified here.
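For reference, a node that satisfies this rule carries the label size=Large in its metadata. A minimal sketch of such a node object (the node name node-1 is a hypothetical example; in practice the label is usually applied with kubectl label):

apiVersion: v1
kind: Node
metadata:
  name: node-1          # hypothetical node name
  labels:
    size: Large         # the label the affinity rule matches against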

Here, in this case, it is just one value, Large. If you think your POD could be placed on either a large or a medium node, you can simply add that value to the list of values:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod

spec:
  containers:
    - name: data-processor
      image: data-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
            - key: size
              operator: In
              values:
                - Large
                - Medium

You could use the NotIn operator to say something like "size not in Small":

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod

spec:
  containers:
    - name: data-processor
      image: data-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
            - key: size
              operator: NotIn
              values:
                - Small

There are a number of other operators as well, such as Exists, DoesNotExist, Gt and Lt. Check the documentation for the full list.
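For instance, a minimal sketch using the Exists operator, which matches any node that simply has the size label, whatever its value; note that no values list is used with Exists:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
            - key: size
              operator: Exists   # matches nodes that have the size label, with any value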

When the PODs are created, these rules are considered and the PODs are placed onto the right nodes. But what happens if node affinity cannot match a node with the given expression?

For instance, what if there are no nodes with a label called size? Or say the labels existed and the PODs were scheduled, but someone changes the label on the node at a future point in time. Will the POD continue to stay on the node?

All of this is answered by the long, sentence-like property we saw earlier, which happens to be the type of node affinity. The type defines the behavior of the scheduler with respect to node affinity and the stages in the life cycle of the POD.

There are two types of node affinity available in Kubernetes:

  1. requiredDuringSchedulingIgnoredDuringExecution
  2. preferredDuringSchedulingIgnoredDuringExecution

When considering node affinity, there are two stages in the life cycle of a pod:

  1. During Scheduling
  2. During Execution

1. During Scheduling

During Scheduling is the stage where a pod does not yet exist and is being created for the first time. If matching nodes are available, the pod will be scheduled onto one of them; we are clear about this. But what if matching labels are not available, for example because we forgot to label the nodes? That is where the type of node affinity used comes into play.

Type 1, requiredDuringScheduling: with the required type, the scheduler mandates that the pod is placed on a node matching the given node affinity rules; if it cannot find one, the pod will not be scheduled. This type is used where the placement of the pod is crucial: if a matching node does not exist, the POD remains unscheduled.

Type 2, preferredDuringScheduling: this helps when the placement of the pod is less important than running the workload. With the preferred type, the scheduler tries to find a matching node, but if it cannot, it ignores the node affinity rules and places the pod on any available node.
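As a sketch, the preferred type takes a list of weighted preferences, where weight (1 to 100) ranks how strongly the scheduler should favor matching nodes:

  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1               # higher weights are favored more strongly
          preference:
            matchExpressions:
              - key: size
                operator: In
                values:
                  - Large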

2. During Execution

During Execution covers the case where a pod is already running and a change is made in the environment that affects node affinity, such as a change to the labels of a node. For example, say an administrator removes the label size=Large from a node. What happens to the pods that are running on that node? Here, both currently available node affinity types behave the same, since both end with IgnoredDuringExecution.

That means the pods will continue to run on their nodes; any change to node labels or node affinity rules will not impact them once they are scheduled.

There are two additional planned types:

  1. requiredDuringSchedulingRequiredDuringExecution
  2. preferredDuringSchedulingRequiredDuringExecution

In these types, the only change is in the execution phase: RequiredDuringExecution means that whenever a change breaks the node affinity rules, the pod will be evicted or terminated.

Node Affinity vs Taints and Tolerations

For example, say we have three nodes and three PODs in three colors: green, red and blue. The ultimate goal is to place the green POD on the green node, the red POD on the red node and the blue POD on the blue node.

We are sharing the same Kubernetes cluster with other teams, so there are other PODs in the cluster as well as other nodes.

Here we don't want any other PODs to be placed on our three nodes, and we also don't want our PODs to be placed on other nodes.

Let's first try to solve this problem using taints and tolerations. We apply a taint to each node, marking it with its color, and then apply tolerations on the PODs to tolerate the respective colors.
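As a sketch, the taint on the blue node and the matching toleration on the blue POD might look like this (the node name node-blue and the NoSchedule effect are illustrative choices; taints are normally applied with kubectl taint):

apiVersion: v1
kind: Node
metadata:
  name: node-blue           # hypothetical node name
spec:
  taints:
    - key: color
      value: blue
      effect: NoSchedule    # reject pods that do not tolerate this taint

And on the POD side:

  tolerations:
    - key: color
      operator: Equal
      value: blue
      effect: NoSchedule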

When the PODs are now created, each node accepts only the PODs with the right toleration. So the blue POD lands on the blue node and the green POD lands on the green node. However, taints and tolerations do not guarantee that the PODs will end up only on these nodes, so the red POD may land on one of the other nodes that carries no taint. This is not desired.

Now let's try to solve the same problem using node affinity. With node affinity, we first label the nodes with their respective colors and then set node affinity rules on the PODs to tie them to those nodes. The PODs will then be placed on the right nodes.

However, that does not guarantee that other PODs will not be placed on our nodes. In this case there is a chance that one of the other PODs ends up on our nodes, which is not something we desire.

As such, a combination of taints and tolerations and node affinity rules can be used to completely dedicate nodes to specific PODs. We first use taints and tolerations to prevent other PODs from being placed on our nodes, and then we use node affinity to prevent our PODs from being placed on other nodes.
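Putting both together, a sketch of the blue POD might look like this (the pod name blue-app and image nginx are hypothetical; it assumes the blue node is tainted with color=blue:NoSchedule and labelled color=blue):

apiVersion: v1
kind: Pod
metadata:
  name: blue-app

spec:
  containers:
    - name: blue-app
      image: nginx              # hypothetical image
  tolerations:                  # tolerate the taint on our blue node
    - key: color
      operator: Equal
      value: blue
      effect: NoSchedule
  affinity:                     # and require placement on a node labelled color=blue
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
            - key: color
              operator: In
              values:
                - blue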
