My experience setting up Istio locally

Recently Istio (which means ‘sail’ in Greek) was announced, an open source platform that can connect, manage, and secure your microservices. It packs tons of features, like:

  • Load balancing
  • Metrics collection
  • Logs collection
  • Tracing
  • Request routing
  • Service discovery
  • Fault injection
  • Rate limiting
  • Auth
  • and much more…

Note: The official documentation tells Mac users to install minikube by executing:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.19.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

however, if you are using Homebrew + Cask, then you can just use:

brew cask install minikube
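
Once it is installed, you can sanity-check the binary and bring up a local cluster:

minikube version
minikube start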

I wanted to give it a try, so I headed straight to the Istio docs and worked through them. In no time everything was up and running locally. Then I decided to deploy BookInfo, a sample app that ships with Istio, and while exporting GATEWAY_URL

export GATEWAY_URL=$(kubectl get po -l istio=ingress -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc istio-ingress -o jsonpath='{spec.ports[0].nodePort}')

I encountered the following errors:

error: name cannot be provided when a selector is specified
error: error executing jsonpath "{spec.ports[0].nodePort}": unrecognized identifier spec
export: not valid in this context: template:
zsh: not an identifier: map[string]interface
zsh: not an identifier: map[string]interface

In order to figure out where the problem was, I quickly broke this single command into a bunch of smaller commands and verified that there was no issue producing JSON by executing:

kubectl get po -l istio=ingress -o json
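
A sketch of that split; each jsonpath half can be run on its own (in zsh both still fail while the expressions are unquoted, which is what pointed me at the shell):

kubectl get po -l istio=ingress -o jsonpath={.items[0].status.hostIP}
kubectl get svc istio-ingress -o jsonpath={spec.ports[0].nodePort}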

Note: To see all the available output options, execute:

kubectl get po -l istio=ingress -o

My small investigation led me to believe that the culprit was jsonpath. After some initial research I came across a GitHub issue and read one of the comments, made by Justin Garrison:

I was upgrading to 1.3.0 and was going to test when @2opremio made me realize zsh has a built in to expand (usually numbers) inside { }
Just opened #1651 with quotes to fix it. Tested on bash and zsh. Thanks for the help debugging everyone, sorry for the trouble.
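
The underlying behaviour is easy to demonstrate: unquoted braces and brackets are shell syntax, while quoting passes them through verbatim (and zsh, unlike bash, also errors out on an unmatched glob such as [0]):

echo {a,b}c       # expanded by the shell to: ac bc
echo '{a,b}c'     # quoted: printed verbatim as {a,b}c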

It was clear that the issue was zsh specific. After applying the suggested solution

export GATEWAY_URL=$(kubectl get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o jsonpath='{spec.ports[0].nodePort}')

I was still receiving a bunch of errors

error: error executing jsonpath "{spec.ports[0].nodePort}": unrecognized identifier spec
export: not valid in this context: template:
zsh: not an identifier: map[string]interface
zsh: not an identifier: map[string]interface

Digging a bit deeper led me to change the syntax of the original command (note the leading dot inside the second jsonpath expression)

export GATEWAY_URL=$(kubectl get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o jsonpath='{.spec.ports[0].nodePort}')

And finally, everything worked. You can verify it by executing:

echo $GATEWAY_URL
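
You can also fetch BookInfo’s product page (served at /productpage in the default sample setup) straight from the terminal:

curl -s http://${GATEWAY_URL}/productpage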

Or just head over to the browser:

[Screenshot: BookInfo – main page]

[Screenshot: BookInfo – Grafana metrics]

[Screenshot: BookInfo – Zipkin dashboard]

[Screenshot: BookInfo – Zipkin trace]

[Screenshot: BookInfo – generated graph]

Docker on AWS, tweaking kernel params

A few months ago we came up with an idea to scale our Jenkins infrastructure by dynamically launching new containers on the AWS cloud in order to build and verify pull requests. It would help our developers avoid long waiting times (on average 4-6 hours!) before their pull requests could be merged, since we had a limited number of Jenkins slaves, and every time we wanted to add a new slave, our Ops had to go through the whole setup all over again.

From a developer’s point of view, it meant packing one really big and fat monolithic app (along with all its dependencies, like messaging queues, a database server, configuration management, a backend API, a validation API, etc.) into a Docker container. All those services needed to run locally inside the container; that way we could scale on demand, i.e. launch N such containers depending on the number of pull requests. Hence, no waiting time (as soon as a pull request arrives, Jenkins can start building it).

We first started development locally on our laptops, and after battling for weeks, everything worked like a charm on developers’ machines.

Then we tried to launch our Docker container on the AWS cloud. We thought that everything would work, since we had already built and tested Docker containers locally on multiple flavors of Linux running different kernel versions, but the reality was quite the opposite: after launching, the container just halted for some strange reason.

After investigating, we found the culprits were kernel parameters; to be precise, too-small shmmax and shmall values. You may ask why a Docker container would require bigger shmmax and shmall. Remember our big and fat monolithic application? It required a lot of dependencies to be installed inside the container, and a few of those dependencies needed to set bigger values for shmmax and shmall in their init scripts.
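
For context, shmmax and shmall are the System V shared-memory limits. A minimal sketch of what those init scripts effectively attempt on startup (the values here are purely illustrative):

# inspect the current shared-memory limits
sysctl kernel.shmmax kernel.shmall

# what the init scripts try to do; in an unprivileged container
# this fails with a permission error
sysctl -w kernel.shmmax=68719476736
sysctl -w kernel.shmall=4294967296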

In order to run those applications, we were forced to launch Docker in privileged mode, like this

docker run --privileged -it someDockerImage

After that, we could run those applications without modifying init-scripts.

Note: Although the above solution works, if possible one should really avoid launching containers in privileged mode, because root inside a Docker container == root on the host.
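
If your Docker version supports it (the --sysctl flag arrived in Docker 1.12), a narrower alternative worth trying is to set just the IPC-namespace sysctls the app needs instead of granting full privileges; a sketch with illustrative values:

docker run --sysctl kernel.shmmax=68719476736 --sysctl kernel.shmall=4294967296 -it someDockerImage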

Upgrade Docker to the latest version on Mac

Docker Toolbox installs a number of tools for us, such as:

  • Docker Client – used to connect to the Docker daemon
  • Docker Machine – provides the docker-machine binary
  • Docker Engine – provides the docker binary
  • Docker Compose – provides the docker-compose binary
  • Docker Kitematic – a Docker GUI
  • VirtualBox – used to boot the virtual machine

To upgrade all dependencies to their latest versions, download Docker Toolbox and install it (double click and follow the installation instructions).

If you are only interested in upgrading the Docker version on a particular virtual machine that is running the Docker daemon, then use docker-machine.

To brush up on some basic knowledge: we can’t execute Docker natively on a Mac (remember, the Docker daemon needs access to Linux-specific kernel features); that’s where docker-machine comes into action. We can use docker-machine to both create and attach to a Linux virtual machine, which provides the host environment for running Docker.
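
For example, creating a VirtualBox-backed machine and pointing the Docker client at it looks like this (the machine name is just an example):

docker-machine create --driver virtualbox default
eval $(docker-machine env default)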

First, we need to find the name of the machine on which we want to upgrade Docker. We can execute docker-machine ls to list all machines. E.g.

[Screenshot: docker-machine ls output]
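
The output looks roughly like this (names, IPs, and versions are illustrative):

NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.11.2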

In my case, I want to upgrade Docker on the “default” machine. In order to do that, execute docker-machine upgrade <yourMachineName>

[Screenshot: docker-machine upgrade default output]

Note: make sure the machine that you want to upgrade is up and running; otherwise you will get an error like this:

[Screenshot: docker-machine upgrade error]

That’s it, Docker is upgraded to the latest version. SSH into the Docker machine by executing docker-machine ssh <yourMachineName> and you will see the latest Docker version.
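
You can also run the check in one shot; for example, with a machine named “default”:

docker-machine ssh default docker version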

[Screenshot: docker version after upgrade]