Prepare Minikube With Istio
In this tutorial, we will prepare minikube to work with Istio, verify the Istio installation, and deploy the sample application that ships with Istio.
In the previous tutorial, I installed Istio on my cluster node and added its bin directory to the PATH so that istioctl works from any location. Running the command istioctl without arguments prints its usage notes.
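If you are setting this up yourself, the PATH change looks roughly like the following. This is a sketch that assumes Istio 1.10.0 was extracted under /root (adjust the path to wherever you downloaded it):

```shell
# Assumption: the Istio 1.10.0 release was extracted to /root/istio-1.10.0.
export PATH="$PATH:/root/istio-1.10.0/bin"

# Persist the change for future shell sessions:
echo 'export PATH="$PATH:/root/istio-1.10.0/bin"' >> ~/.bashrc

# istioctl should now resolve from any directory:
istioctl version
```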
root@cluster-node:~# istioctl
Istio configuration command line utility for service operators to
debug and diagnose their Istio mesh.

Usage:
  istioctl [command]

Available Commands:
  admin          Manage control plane (istiod) configuration
  analyze        Analyze Istio configuration and print validation messages
  bug-report     Cluster information and log capture support tool.
  dashboard      Access to Istio web UIs
  experimental   Experimental commands that may be modified or deprecated
  help           Help about any command
  install        Applies an Istio manifest, installing or reconfiguring Istio on a cluster.
  kube-inject    Inject Envoy sidecar into Kubernetes pod resources
  manifest       Commands related to Istio manifests
  operator       Commands related to Istio operator controller.
  profile        Commands related to Istio configuration profiles
  proxy-config   Retrieve information about proxy configuration from Envoy [kube only]
  proxy-status   Retrieves the synchronization status of each Envoy in the mesh [kube only]
  upgrade        Upgrade Istio control plane in-place
  validate       Validate Istio policy and rules files
  verify-install Verifies Istio Installation Status
  version        Prints out build version information
  .......................
  .......................
That confirms istioctl is recognized. Next, I can check the status of minikube.
root@cluster-node:~# minikube status
minikube
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
The cluster is in the stopped state, so I can go ahead and start it.
root@cluster-node:~# minikube start
😄  minikube v1.20.0 on Ubuntu 20.04 (vbox/amd64)
    ▪ KUBECONFIG=/var/snap/microk8s/current/credentials/kubelet.config
✨  Using the none driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing none bare metal machine for "minikube" …
ℹ️  OS release is Ubuntu 20.04.2 LTS
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 …
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
🤹  Configuring local host environment …
❗  The 'none' driver is designed for experts who need to integrate with an existing VM
💡  Most users should use the newer 'docker' driver instead, which does not require root!
📘  For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
❗  kubectl and minikube configuration will be stored in /root
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
    ▪ sudo mv /root/.kube /root/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube
💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
🔎  Verifying Kubernetes components…
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
If any pods were already deployed, for example the sample applications, they will still exist and minikube will restart them.
One very important point, let me remind you once again: always make sure the istio-injection=enabled label is added to the namespace where you are going to work with Istio. In our case, we enabled Istio injection on the default namespace.
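If the label is not yet in place, it can be added with a single command. This sketch assumes the default namespace is the one you are working in:

```shell
# Add the sidecar-injection label to the default namespace
# (--overwrite makes the command safe to re-run):
kubectl label namespace default istio-injection=enabled --overwrite

# Quick confirmation that the label is present:
kubectl get namespace default --show-labels
```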
I can verify whether the label is present using the describe command:
root@cluster-node:~# kubectl describe namespace default
Name:         default
Labels:       injection=enabled
              istio-injection=enabled
Annotations:
Status:       Active

No resource quota.

No LimitRange resource.
So here I do have the label istio-injection=enabled, which means any pod created within this namespace will have two containers: the application container and the injected sidecar.
Get PODs and Deployments
Let me go ahead and list all the available deployments and pods. Since Istio is already installed, there are deployments in the istio-system namespace, and pods in both the istio-system and kube-system namespaces.
root@cluster-node:~/istio-1.10.0# kubectl get deployment --all-namespaces
NAMESPACE      NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
istio-system   istio-egressgateway    1/1     1            1           17h
istio-system   istio-ingressgateway   1/1     1            1           17h
istio-system   istiod                 1/1     1            1           17h
kube-system    coredns                1/1     1            1           19h
root@cluster-node:~/istio-1.10.0# kubectl get pods --all-namespaces
NAMESPACE      NAME                                    READY   STATUS    RESTARTS   AGE
istio-system   istio-egressgateway-585f7668fc-x5wvq    1/1     Running   1          17h
istio-system   istio-ingressgateway-8657768d87-w7gdm   1/1     Running   1          17h
istio-system   istiod-56874696b5-rqftq                 1/1     Running   1          17h
kube-system    coredns-74ff55c5b-7hr6l                 1/1     Running   1          19h
kube-system    etcd-cluster-node                       1/1     Running   1          19h
kube-system    kube-apiserver-cluster-node             1/1     Running   1          19h
kube-system    kube-controller-manager-cluster-node    1/1     Running   1          19h
kube-system    kube-proxy-g2qth                        1/1     Running   1          19h
kube-system    kube-scheduler-cluster-node             1/1     Running   1          19h
kube-system    storage-provisioner                     1/1     Running   2          19h
These are the pods that Kubernetes itself needs in order to work; they were started as part of the minikube installation.
We will discuss each pod and its role in detail later, when we cover the architecture.
Sample Application Example
Now, let me go ahead and deploy the sample application by applying the bookinfo.yaml file available in the samples folder.
root@cluster-node:~/istio-1.10.0# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
This creates all the required entities: deployments, services, service accounts, and the resulting pods.
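The pods take a short while to come up, so before inspecting them it can be convenient to wait for readiness. A minimal sketch (the 5-minute timeout is an arbitrary choice for illustration):

```shell
# Block until every pod in the default namespace reports Ready,
# or give up after 5 minutes:
kubectl wait --for=condition=Ready pods --all -n default --timeout=300s
```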
root@cluster-node:~/istio-1.10.0# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-km2kh       2/2     Running   0          2m20s
productpage-v1-6b746f74dc-4nrzr   2/2     Running   0          2m20s
ratings-v1-b6994bb9-rrdqc         2/2     Running   0          2m20s
reviews-v1-545db77b95-6dccl       2/2     Running   0          2m20s
reviews-v2-7bf8c9648f-njnkh       2/2     Running   0          2m20s
reviews-v3-84779c7bbc-qpjfm       2/2     Running   0          2m20s
Notice the READY column: each pod runs two containers. As we discussed earlier, Istio injects a proxy container into each and every pod.
The reason is that the label istio-injection=enabled is set on the default namespace. I can get the details of any single pod using the describe pod command; inside the pod there are two containers: the istio-proxy container and the application container, for example the bookinfo details container.
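Instead of reading through the full describe output, the container names can also be pulled out with a JSONPath query. The pod name below is the one from my cluster; yours will have a different hash suffix:

```shell
# List only the container names inside the details pod.
# Expect to see the app container plus the injected sidecar,
# e.g. "details istio-proxy":
kubectl get pod details-v1-79f774bdb9-km2kh \
  -o jsonpath='{.spec.containers[*].name}'
```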
Now the application is up and running, and we need a way to access it. I can use curl against port 9080, which is where the service is listening.
Let me go ahead and list all the running services using the command kubectl get services.
root@cluster-node:~/istio-1.10.0# kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.98.97.147     <none>        9080/TCP   9m19s
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    19h
productpage   ClusterIP   10.105.162.110   <none>        9080/TCP   9m19s
ratings       ClusterIP   10.104.111.31    <none>        9080/TCP   9m19s
reviews       ClusterIP   10.102.103.237   <none>        9080/TCP   9m19s
So the productpage service is listening on port 9080 and reachable at the cluster IP 10.105.162.110. Since there is no external IP, I can only reach it from inside the cluster; to access the application from outside, the service needs to be exposed.
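Until an Istio ingress gateway is configured, two quick ways to smoke-test the ClusterIP service are a port-forward from the host, or a curl from inside the mesh. A sketch of both (the grep on the page's title tag is just a sanity check, not part of the application):

```shell
# Option A: forward the productpage service to the local machine,
# then hit it with curl:
kubectl port-forward svc/productpage 9080:9080 &
curl -s http://localhost:9080/productpage | grep -o "<title>.*</title>"

# Option B: call the service from inside the mesh, here via the
# ratings pod (its image ships with curl):
kubectl exec "$(kubectl get pod -l app=ratings \
  -o jsonpath='{.items[0].metadata.name}')" -c ratings -- \
  curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
```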