Essentially, Kubernetes exposes a backend service internally within the cluster, and the front end interacts with this service. The pods backing the service can be replaced at any time without anyone noticing. But as your application's features grow, so does the number of services you need to maintain. Each service can potentially talk to every other service in the cluster, and the resulting network is termed a service mesh.
There are a lot of add-ons for Kubernetes that help simplify the management of this service mesh. Key features like mutual TLS, automated load balancing, and securing APIs even on the internal network are offered by these add-ons. Options such as Istio, Linkerd, and Conduit can be integrated with Kubernetes to accomplish this. We will be looking into Istio in this post, since its version 1.0 was recently announced.
To get started with Istio, you need a working Kubernetes cluster. There are three ways to get one.
- You can install Minikube to create a single node cluster on your local machine.
- Or, if you are using Docker on Windows or Mac you can enable a single-node Kubernetes cluster in Docker settings.
- Or you can use an online service like the Katacoda playground, which is what we will be using.
Why use a Service Mesh?
Installing a service mesh like Istio makes working with microservices easier. While developing, you don't have to worry about building support for mutual TLS, load balancing, service discovery, or similar concerns into each microservice. An ideal service mesh lets you connect microservices, secure them from one another and from the outside world, and manage them in an organized way. It helps both developers and operators immensely.
Installing Istio requires a Kubernetes cluster. If you have a single-node cluster, like the one Minikube or Docker Desktop gives you, then all the commands can be run on your local node. However, if you are using a multi-node cluster like the one the Katacoda playground offers, keep in mind that most of the commands and setup procedures are run on the master node. Yes, they affect the entire cluster, but we need to interact only with the master node.
We begin by cloning (or downloading) the latest release of Istio from GitHub. Windows users might want to visit this page and get the appropriate .zip file.
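On Linux or macOS, the download step above can be done with Istio's documented helper script; pinning ISTIO_VERSION to the release discussed in this post keeps the directory name predictable:

```shell
# Download and unpack Istio 1.0.0 into ./istio-1.0.0
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.0 sh -
```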
$ cd istio-1.0.0
The directory's name may change over time as newer releases come out; at the time of this writing, 1.0.0 is the latest stable release. This repo contains not only the service mesh extension but also a sample app called BookInfo for experimentation purposes. The download script also prompts you to add the new directory $PWD/istio-1.0.0/bin to your PATH variable.
This directory contains the istioctl binary, which can be used to interact with the cluster. Windows users can call the binary by going to the folder istio-1.0.0\bin and running .\istioctl from PowerShell or the command prompt. Note that istioctl is an optional add-on.
If you are using a Mac, you can add the directory to your PATH with the following command:
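Run from inside the istio-1.0.0 directory; this only lasts for the current shell session, so add it to your shell profile if you want it to persist:

```shell
# Prepend the release's bin/ directory (which holds istioctl) to PATH
export PATH=$PWD/bin:$PATH
```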
Next, we need to extend the Kubernetes API with the Custom Resource Definitions (CRDs) that Istio provides:
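From the root of the istio-1.0.0 directory, the CRDs can be registered with a single kubectl apply (the manifest path below is as shipped in the 1.0 release):

```shell
# Register Istio's Custom Resource Definitions with the Kubernetes API server
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
```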
This takes effect within a few seconds, and once it is done your kube-apiserver will have the Istio extensions available. From here on, the installation options vary depending on whether you are using Istio for production or experimenting with it in your own isolated environment.
We are going to assume the latter is the case, and install Istio without mutual TLS authentication:
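In the 1.0 release, the demo manifest without mutual TLS is istio-demo.yaml (the variant with mutual TLS enabled is istio-demo-auth.yaml); apply it from the repo root:

```shell
# Install Istio's components without mutual TLS between sidecars
kubectl apply -f install/kubernetes/istio-demo.yaml
```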
This creates a new namespace, istio-system, where all the various components like istio-pilot and the ingress gateway will be installed.
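You can watch the components come up to confirm the installation succeeded; all pods in the namespace should eventually reach the Running or Completed state:

```shell
# List Istio's control-plane pods as they start up
kubectl get pods -n istio-system
```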
Application Deployment and Istio Injector
Here comes the utility of Istio. Istio adds sidecar proxies to your services, and this is done without modifying the actual code of your application. If the automatic istio-sidecar-injector is enabled, you can label a namespace with istio-injection=enabled, and when your application is deployed in that namespace, its pods will get specialized Envoy proxy containers alongside the containers for the core application. For example, let's label the default namespace:
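Labeling the namespace is a one-line kubectl command:

```shell
# Enable automatic sidecar injection for all pods created in the default namespace
kubectl label namespace default istio-injection=enabled
```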
Now let's deploy the sample BookInfo app in this namespace. From the root directory of the Istio repo that we cloned, run:
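The BookInfo manifest ships with the release; the path below is where it lives in the 1.0 release layout:

```shell
# Deploy the BookInfo sample application into the default namespace
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
```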
You can list all the pods running here:
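Since BookInfo was deployed into the default namespace, a plain kubectl listing shows its pods:

```shell
# List the BookInfo pods; each should report 2/2 containers ready
kubectl get pods
```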
Pick any one of these pods and inspect its details. For example, one of the pods from the BookInfo app in my deployment is named details-v1-6865b9b99d-6mxx9:
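The pod name below is from the author's deployment; the random suffix will differ in yours, so substitute a name from your own pod listing:

```shell
# Show the pod's full description, including its list of containers
kubectl describe pod details-v1-6865b9b99d-6mxx9
```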
In the description, you will notice that the pod contains two containers: the first is the actual application container running the image examples-bookinfo-details-v1:1.8.0, and the second is the istio-proxy sidecar running the image gcr.io/istio-release/proxyv2:1.0.0.
Istio offers fine-grained control over your service mesh because it injects these proxy containers into the very pods where your applications reside. This, combined with easy-to-use TLS for communication and fine-grained traffic control, is one of the many reasons large applications can benefit from a service mesh like Istio.
The actual architecture has several components, like Pilot, Citadel, and Mixer, each with its own important role to perform. You can learn a lot more about these components here and try to deploy your own microservices here.