CNI in Kubernetes


In this tutorial, we are going to discuss CNI (Container Network Interface) in Kubernetes.

In the previous tutorials, we started from the absolute basics of network namespaces, saw how Docker handles container networking, discussed why containers need a networking standard and how the Container Network Interface came to be, and then reviewed the list of supported plugins available with CNI.

So, in this tutorial we will see how Kubernetes is configured to use these network plugins. As we discussed in the previous tutorials, CNI defines the responsibilities of the container runtime.

As per CNI, the container runtime (in our case Kubernetes) is responsible for creating container network namespaces, and for identifying and attaching those namespaces to the right network by calling the right network plugin.
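The calling convention between runtime and plugin is simple: the runtime execs the plugin binary with a handful of CNI_* environment variables and the network configuration on stdin. The sketch below uses a fake plugin script as a stand-in, since a real plugin such as /opt/cni/bin/bridge needs root privileges and an installed CNI setup; the container ID and paths are made up for illustration:

```shell
# Create a stand-in "plugin" that just reports how it was invoked.
# (A real plugin, e.g. /opt/cni/bin/bridge, would create the actual interfaces.)
tmp=$(mktemp -d)
cat > "$tmp/fake-plugin" <<'EOF'
#!/bin/sh
read -r config              # network config arrives on stdin
echo "command=$CNI_COMMAND ifname=$CNI_IFNAME net=$config"
EOF
chmod +x "$tmp/fake-plugin"

# This mirrors how a runtime asks a plugin to attach a container:
echo '{"name":"mynet","type":"bridge"}' |
  CNI_COMMAND=ADD \
  CNI_CONTAINERID=abc123 \
  CNI_NETNS=/var/run/netns/abc123 \
  CNI_IFNAME=eth0 \
  "$tmp/fake-plugin"
# prints: command=ADD ifname=eth0 net={"name":"mynet","type":"bridge"}
```

The same convention is reused for detaching: the runtime repeats the call with CNI_COMMAND=DEL when the container is removed.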

Configuring CNI

So where do we specify the CNI plugins for Kubernetes to use? The CNI plugin must be invoked by the component within Kubernetes that is responsible for creating containers, because that component must invoke the appropriate network plugin after the container is created.

The CNI plugin is configured in the kubelet service on each node in the cluster. If you look at the kubelet service file, you will see an option called network-plugin set to cni.


You can see the same information by viewing the running kubelet service. You can see the network plugin set to cni, along with a few other CNI-related options such as the CNI bin directory and the CNI config directory.
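For reference, on clusters where kubelet drives CNI directly (Kubernetes versions before the dockershim removal), the options look roughly like the excerpt below. The kubelet binary path and the two directories are common defaults, not guaranteed on every distribution:

```shell
# excerpt from a typical kubelet service file (other flags omitted)
ExecStart=/usr/local/bin/kubelet \
  --network-plugin=cni \
  --cni-bin-dir=/opt/cni/bin \
  --cni-conf-dir=/etc/cni/net.d
```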


The CNI bin directory has all the supported CNI plugins as executables, such as bridge, dhcp, flannel etc. The CNI config directory has a set of configuration files. This is where kubelet looks to find out which plugin needs to be used.

In this case it finds the bridge configuration file. If there are multiple files here, it will choose the first one in alphabetical order.
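A quick way to convince yourself of that ordering (the second file name here is made up for illustration):

```shell
# Simulate a CNI config directory with two hypothetical config files
tmp=$(mktemp -d)
touch "$tmp/10-bridge.conf" "$tmp/20-flannel.conf"

# kubelet uses the lexicographically first file in the directory:
ls "$tmp" | head -n 1
# prints: 10-bridge.conf
```

This is why CNI config files are conventionally prefixed with a number, so that administrators can control which one wins.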

If you look at the bridge conf file, it looks like the following. This is a format defined by the CNI standard for a plugin configuration file. Its name is mynet and its type is bridge.

$ ls /etc/cni/net.d
10-bridge.conf

$ cat /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}

It also has a set of other configuration options that relate to the concepts we discussed in the previous tutorials on bridging, routing and masquerading in NAT.

The isGateway option defines whether the bridge network interface should get an IP address assigned so it can act as a gateway.

The ipMasq option defines whether a NAT rule should be added for IP masquerading. The ipam section defines the IPAM configuration. This is where you specify the subnet, or the range of IP addresses that will be assigned to pods, and any necessary routes.
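For intuition, enabling ipMasq results in a NAT rule conceptually equivalent to the one below. This is illustrative only: the plugin installs and manages the actual rules (including its own chains) itself, so you would not run this by hand:

```shell
# Masquerade traffic leaving the pod subnet for destinations outside it
iptables -t nat -A POSTROUTING -s 10.22.0.0/16 ! -d 10.22.0.0/16 -j MASQUERADE
```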

The type host-local indicates that the IP addresses are managed locally on this host, unlike a DHCP server maintaining them remotely. The type can also be set to dhcp to use an external DHCP server instead.
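As a hypothetical variant of the file shown above, the ipam section could be swapped out to delegate allocation like this (note that the CNI dhcp plugin also needs its companion daemon running on the host):

```json
"ipam": {
    "type": "dhcp"
}
```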
