Ingress in Kubernetes



In this tutorial, we are going to discuss Ingress in Kubernetes. One of the most common questions people ask is about Services and Ingress: what is the difference between the two, and when should you use which?

So we’re going to briefly revisit Services and work our way towards Ingress. We will start with a simple scenario: you are deploying an application on Kubernetes for a company that runs an online store selling products.

Your application would be available at, say, my-online-store.com. You build the application into a Docker image and deploy it on the Kubernetes cluster as a POD in a Deployment.

ClusterIP Service

Your application needs a database, so you deploy a MySQL database as a POD and create a service of type ClusterIP called mysql-service to make it accessible to your application.
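As a minimal sketch, such a service could look like the following (the app: mysql label selector and the standard MySQL port 3306 are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: ClusterIP
  ports:
    - port: 3306       # standard MySQL port
      targetPort: 3306
  selector:
    app: mysql         # assumed label on the MySQL POD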

NodePort Service

So your application is now working. To make it accessible to the outside world, you create another service, this time of type NodePort, and make your application available on a high port on the nodes in the cluster.

In this example, port 38080 is allocated for the service. Users can now access your application using the URL http://<node-ip>:38080.
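A minimal sketch of such a NodePort service (the service name, label selector, and application port 8080 are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: wear-service   # assumed service name
spec:
  type: NodePort
  ports:
    - port: 8080       # assumed application port
      targetPort: 8080
      nodePort: 38080  # the high port allocated on each node
  selector:
    app: wear          # assumed label on the application PODs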

This setup works, and users are able to access the application. Whenever traffic increases, you increase the number of replicas of the POD to handle the additional traffic, and the service takes care of splitting traffic between the PODs.
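For example, assuming the deployment is named my-online-store, scaling it out could be as simple as:

$ kubectl scale deployment my-online-store --replicas=4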

However, if you have deployed a production-grade application before, you know that there are many more things involved in addition to simply splitting the traffic between the PODs.

For example, you do not want your users to have to type in an IP address every time. So you configure your DNS server to point to the IPs of the nodes, and your users can now access your application using the URL my-online-store.com and port 38080.

Now, you don’t want your users to have to remember a port number either. However, service node ports can only allocate high-numbered ports, greater than 30000.

So you then bring in an additional layer between the DNS server and your cluster, such as a proxy server that proxies requests on port 80 to port 38080 on your nodes.

You then point your DNS to this server, and users can now access your application by simply visiting my-online-store.com.

Now, this is how you would do it if your application were hosted on-premises in your data center.

Public cloud environment

Let’s take a step back and see what you could do if you were in a public cloud environment like Google Cloud Platform.

In that case, instead of creating a service of type NodePort for your application, you could set it to type LoadBalancer.

When you do that, Kubernetes would still do everything it has to do for a NodePort, which is to provision a high port for the service.

But in addition to that Kubernetes also sends a request to Google Cloud Platform to provision a network load balancer for the service.

On receiving the request GCP would then automatically deploy a load balancer configured to route traffic to the service ports on all the nodes and return its information to Kubernetes.

The LoadBalancer has an external IP that can be provided to users to access the application. In this case we set the DNS to point to this IP and users access the application using the URL my-online-store.com.
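Switching to a load balancer is just a change of the service type in the definition file. A sketch, reusing the assumed names from before:

apiVersion: v1
kind: Service
metadata:
  name: wear-service
spec:
  type: LoadBalancer
  ports:
    - port: 80         # port exposed by the cloud load balancer
      targetPort: 8080 # assumed application port
  selector:
    app: wear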

Deploying a new service

Now your company’s business grows, and you have new services for your customers; for example, a video streaming service. You want your users to be able to access the new video streaming service by going to my-online-store.com/watch.

You’d like to make your old application accessible at my-online-store.com/wear. Your developers built the video streaming service as a completely separate application, since it has nothing to do with the existing one.

However, in order to share the same cluster resources, you deploy the new application as a separate deployment within the same cluster, and create a service called video-service of type LoadBalancer.

Kubernetes provisions port 38282 for this service and also provisions a network load balancer on the cloud. The new load balancer has a new IP. Remember, you must pay for each of these load balancers, and having many of them can adversely affect your cloud bill.

So how do you direct traffic between each of these load balancers based on the URL that the users type in? You need yet another proxy or load balancer that can redirect traffic to the different services based on URLs.

Every time you introduce a new service, you have to reconfigure the load balancer.

Enable SSL

Finally you also need to enable SSL for your applications so your users can access your application using https. Where do you configure that?

It can be done at different levels: either at the application level itself, or at the load balancer or proxy server level. But which one? You don’t want your developers to implement it in their applications, as they would each do it in different ways.

You want it to be configured in one place, with minimal maintenance. That’s a lot of different configuration, and all of it becomes difficult to manage when your application scales.

It requires involving different individuals and different teams. You need to configure firewall rules for each new service, and it is expensive as well, since a new cloud-native load balancer needs to be provisioned for each service.

Wouldn’t it be nice if you could manage all of that within the Kubernetes cluster, and have all of that configuration in just another Kubernetes definition file that lives along with the rest of your application deployment files? That’s where Ingress comes in.

Ingress

Ingress helps your users access your application using a single externally accessible URL that you can configure to route to different services within your cluster based on the URL path, while implementing SSL security at the same time.

Simply put, think of Ingress as a layer 7 load balancer built into the Kubernetes cluster that can be configured using native Kubernetes primitives, just like any other object in Kubernetes.

Now remember, even with Ingress you still need to expose it to make it accessible outside the cluster, so you still have to publish it either as a NodePort or with a cloud-native load balancer. But that is just a one-time configuration.

Going forward, you will perform all your load balancing, authentication, SSL, and URL-based routing configurations on the Ingress controller.

So how does it work? What is it? Where is it? How can you see it and how can you configure it? How does it load balance? And how does it implement SSL? Without Ingress, how would you do all of this?

Load balancing solutions

I would use a reverse-proxy or a load balancing solution like NGINX or HAProxy or Traefik. I would deploy them on my Kubernetes cluster and configure them to route traffic to other services.

The configuration involves defining URL routes, configuring SSL certificates, and so on. Ingress is implemented by Kubernetes in much the same way.

You first deploy a supported solution, which can be any of those listed above, and then specify a set of rules to configure Ingress.

The solution you deploy is called an Ingress Controller, and the set of rules you configure are called Ingress Resources.

Ingress resources are created using definition files, like the ones we used to create Pods, Deployments, and Services earlier in these tutorials.

Now remember, a Kubernetes cluster does NOT come with an Ingress Controller by default. If you set up a cluster, you won’t have an ingress controller built into it.

So if you simply create ingress resources and expect them to work, they won’t. Let’s look at each of these in a bit more detail.

Ingress Controller

As I mentioned you do not have an Ingress Controller on Kubernetes by default. So you must deploy one. What do you deploy?

There are a number of solutions available for ingress, a few of them being GCE (Google’s layer 7 HTTP load balancer), NGINX, Contour, HAProxy, Traefik, and Istio. Of these, GCE and NGINX are currently supported and maintained by the Kubernetes project.

In this tutorial, we will use NGINX as an example. These ingress controllers are not just another load balancer or nginx server; the load balancing components are just a part of them.

The Ingress controllers have additional intelligence built into them to monitor the Kubernetes cluster for new definitions or ingress resources and configure the nginx server accordingly.

An NGINX controller is deployed as just another deployment in Kubernetes. So we start with a deployment definition file named nginx-ingress-controller, with 1 replica and a simple pod definition template.

We label the pods nginx-ingress, and the image used is the nginx-ingress-controller image with the right version.

Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      serviceAccountName: ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - '--configmap=$(POD_NAMESPACE)/nginx-configuration'
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443

Now this is a special build of NGINX built specifically to be used as an ingress controller in Kubernetes. So it has its own set of requirements.

Within the image, the nginx program is stored at the location /nginx-ingress-controller. So you must pass that as the command used to start the nginx controller service.

If you have worked with NGINX before, you know that it has a set of configuration options such as the path to store the logs, keep-alive threshold, ssl settings, session timeout etc.

In order to decouple this configuration data from the nginx-controller image, you must create a ConfigMap object and pass it in.

ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration

Now remember the ConfigMap object need not have any entries at this point. A blank object will do. But creating one makes it easy for you to modify a configuration setting in the future.

You will just have to add the setting to this ConfigMap and not have to worry about modifying the nginx configuration files.
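For example, to change a couple of settings later, you would simply add entries under data. A sketch (these particular option names are illustrative; check the options supported by your controller version):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"  # illustrative option
  keep-alive: "75"                  # illustrative option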

You must also pass in two environment variables that carry the POD’s name and namespace it is deployed to. The nginx service requires these to read the configuration data from within the POD.

And finally specify the ports used by the ingress controller which happens to be 80 and 443.

We then need a service to expose the ingress controller to the external world. So we create a service of type NodePort with the nginx-ingress label selector to link the service to the deployment.

apiVersion: v1
kind: Service
metadata:
  name: ingress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    name: nginx-ingress

As mentioned before, the Ingress controllers have additional intelligence built into them to monitor the Kubernetes cluster for ingress resources and configure the underlying nginx server when something is changed.

But for the ingress controller to do this, it requires a service account with the right set of permissions. For that, we create a service account with the correct Roles and RoleBindings.
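A minimal sketch of the ServiceAccount referenced by the deployment above (the Role and RoleBinding that grant it access to watch services, endpoints, configmaps, and ingress resources are not shown here):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-serviceaccount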

So to summarize: with a deployment of the nginx-ingress image, a service to expose it, a ConfigMap to feed in nginx configuration data, and a service account with the right permissions to access all of these objects, we should be ready with an ingress controller in its simplest form.

Ingress Resources

Now on to the next part: creating ingress resources. An ingress resource is a set of rules and configurations applied on the ingress controller.

You can configure rules to say simply forward all incoming traffic to a single application or route traffic to different applications based on the URL.

So if the user goes to my-online-store.com/wear, route to one app; if the user visits the /watch URL, route to the video app; and so on. Or you could route users based on the domain name itself.

For example, if the user visits wear.my-online-store.com, then route to the wear app; otherwise, route to the video app. Let us look at how to configure these in a bit more detail.

The Ingress Resource is created with a Kubernetes Definition file. In this case, ingress-wear.yaml.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear
spec:
  backend:
    serviceName: wear-service
    servicePort: 80

The traffic is, of course, routed to the application services and not to PODs directly. As you might already know, the backend section defines where the traffic will be routed to.

So if it’s a single backend, then you don’t really have any rules; you can simply specify the service name and port of the backend wear service.

You create the ingress resource using the following command:

$ kubectl create -f ingress-wear.yaml
ingress.extensions/ingress-wear created

View the created ingress by running the following command:

$ kubectl get ingress

The new ingress is now created and routes all incoming traffic directly to the wear-service.

Ingress Resource Rules

Now, you can use rules when you want to route traffic based on different conditions. For example, you create one rule for traffic originating from each domain or hostname.

That means when users reach your cluster using the domain name my-online-store.com, you can handle that traffic using Rule 1.

When users reach your cluster using the domain name wear.my-online-store.com, you can handle that traffic using a separate Rule 2.

Use Rule 3 to handle traffic to watch.my-online-store.com, and a fourth rule to handle everything else.

Now, within each rule you can handle different paths. For example, within Rule 1 you can handle the /wear path to route that traffic to the clothes application.

You can also have a /watch path to route traffic to the video streaming application, and a third path that routes anything other than the first two to a 404 Not Found page.

Similarly, the second rule handles all traffic from wear.my-online-store.com. You can have path definitions within this rule to route traffic based on different paths.

For example, say you have different applications and services within the apparel section for shopping, returns, or support. When a user goes to wear.my-online-store.com/, by default they reach the shopping page. But if they go to the /returns or /support URL, they reach different backend services.

The same goes for Rule 3, where you route traffic for watch.my-online-store.com to the video streaming application. But you can have additional paths in it, such as /movies or /tv.

And finally, anything other than the ones listed here will go to the fourth rule, which would simply show a 404 Not Found error page.

So remember you have rules at the top for each host or domain name and within each rule you have different paths to route traffic based on the URL.

Configure ingress resources

Now, let’s look at how we configure ingress resources in Kubernetes, starting where we left off. We begin with a similar definition file; this time, under spec, we have a set of rules.

Our requirement here is to handle all traffic coming to my-online-store.com and route it based on the URL path.

So we just need a single rule for this, since we are only handling traffic to a single domain name, which is my-online-store.com. Under rules we have one item, which is an http rule in which we specify different paths. So paths is an array of multiple items.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear-watch
spec:
  rules:
    - http:
        paths:
          - path: /wear
            backend:
              serviceName: wear-service
              servicePort: 80
          - path: /watch
            backend:
              serviceName: watch-service
              servicePort: 80

Create the ingress resource using the kubectl create command.
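Assuming the definition above is saved as ingress-wear-watch.yaml:

$ kubectl create -f ingress-wear-watch.yaml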

Describe Ingress Resource

Once created, view additional details about the ingress resource by running the following command:

$ kubectl describe ingress ingress-wear-watch
Name:             ingress-wear-watch
Namespace:        default
Address:
Default backend:  default-http-backend:80 ()
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /wear    wear-service:80 ()
              /watch   watch-service:80 ()
Annotations:  
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  23s   nginx-ingress-controller  Ingress default/ingress-wear-watch

You now see two backend URLs under the rules, and the backend services they point to.

Now, if you look closely at the output of the above command, you see that there is something about a default backend. What might that be?

If a user tries to access a URL that does not match any of these rules, then the user is directed to the service specified as the default backend. In this case it happens to be a service named default-http-backend.

So you must remember to deploy such a backend service in your cluster. Say a user visits the URL my-online-store.com/listen or /eat, and you don’t have an audio streaming or a food delivery service; you might want to show them a nice message.

You can do this by configuring a default backend service to display the 404 Not Found error page.
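One way to configure this is in the ingress definition itself. A sketch (assuming a service named default-http-backend exists in the cluster; with the extensions/v1beta1 API used here, the spec-level backend acts as the catch-all alongside the rules):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-default-backend
spec:
  backend:                 # catch-all for URLs that match no rule
    serviceName: default-http-backend
    servicePort: 80
  rules:
    - http:
        paths:
          - path: /wear
            backend:
              serviceName: wear-service
              servicePort: 80
          - path: /watch
            backend:
              serviceName: watch-service
              servicePort: 80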

The third type of configuration is using domain names or host names. We start by creating a similar definition file for ingress.

host field

Now that we have two domain names, we create two rules, one for each domain. To split traffic by domain name, we use the host field.

The host field in each rule matches the specified value with the domain name used in the request URL and routes traffic to the appropriate backend.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear-watch
spec:
  rules:
    - host: wear.my-online-store.com
      http:
        paths:
          - backend:
              serviceName: wear-service
              servicePort: 80
    - host: watch.my-online-store.com
      http:
        paths:
          - backend:
              serviceName: watch-service
              servicePort: 80

Now remember in the previous case we did not specify the host field.

If you don’t specify the host field, the rule simply treats it as a * and accepts all incoming traffic through that particular rule, without matching the hostname.

In this case, note that we only have a single backend path for each rule which is fine. All traffic from these domain names will be routed to the appropriate backend irrespective of the URL path.

You can still have multiple path specifications in each of these rules to handle different URL paths, as we saw in the example earlier; a combined sketch is shown below.
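Here is a sketch combining both approaches: two host rules, each with its own set of paths (returns-service and movies-service are assumed names for illustration):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-by-host-and-path
spec:
  rules:
    - host: wear.my-online-store.com
      http:
        paths:
          - path: /
            backend:
              serviceName: wear-service
              servicePort: 80
          - path: /returns
            backend:
              serviceName: returns-service  # assumed service
              servicePort: 80
    - host: watch.my-online-store.com
      http:
        paths:
          - path: /movies
            backend:
              serviceName: movies-service   # assumed service
              servicePort: 80

So let’s compare the two.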

Splitting traffic by URL required just one rule, and we split the traffic with two paths. To split traffic by hostname, we used two rules with one path specification in each rule.
