Null Safety – Calling Java From Kotlin?

The JVM already provides a safety net in the form of bytecode verification, protection against buffer overflows, type safety, etc.; Kotlin takes this a step further and bakes null safety right into the type system. This means we can deal with null at compile time rather than bumping into a NullPointerException (NPE) at runtime. Nevertheless, we can still encounter an NPE by:

  • Invoking external Java code, which in turn can throw an NPE.
  • Using the !! operator.
  • Explicitly throwing an NPE.
  • Using an uninitialised this in a constructor (data inconsistency).

In this post, we will focus on how to take advantage of the null-safety mechanism when invoking Java code.

Java declarations are treated as platform (flexible) types in Kotlin. These types cannot be mentioned explicitly in the program, i.e. if we try to declare a variable of a platform type, we get a compilation error. E.g.

[Screenshot: compilation error when trying to explicitly declare a platform type in the program]

From the Kotlin compiler’s perspective, a platform type is one that can be used as both nullable and non-nullable; hence there is no syntax in the language to represent it. The following mnemonic notation is used to denote platform types:

[Image: mnemonic notation – T! denotes "T or T?"]

When we invoke Java code from Kotlin, null checks are relaxed because of platform types. The way to leverage the existing null-safety mechanism is to represent platform types as actual Kotlin types (nullable or non-nullable). Digging into the source code of the Kotlin compiler, one can find various flavours of nullability annotations that aim to achieve this. Let’s see an example of how to use them.

We have a class called Message.java:


public class Message {

    private final String greetingMessage = "Hello ";

    public String getEchoMessage() {
        return greetingMessage;
    }
}


When we invoke getEchoMessage() from Kotlin, the compiler infers its return type as a platform type. The inferred type String! means that the variable message may or may not hold a String value. For this reason, the compiler cannot force us to handle null on platform types.

[Screenshot: the compiler infers the return type as a platform type, hence no null safety]
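To make this concrete, here is a minimal Kotlin sketch of what the compiler accepts for a platform type (the variable names are just for illustration; the Java getter is exposed as the synthetic property echoMessage):

val message = Message().echoMessage     // inferred type: String! (platform type)
val length: Int = message.length        // compiles, but would throw an NPE at runtime if message were null
val safeLength: Int? = message?.length  // a safe call is also accepted – the compiler enforces neither style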

Looking at the source code of getEchoMessage(), one can spot that it always returns a non-null value. We can use the @Nonnull annotation to document this. E.g.


import javax.annotation.Nonnull;
import javax.annotation.meta.When;

public class Message {

    private final String greetingMessage = "Hello ";

    // When.ALWAYS is the default option and makes the annotated type non-nullable
    @Nonnull(when = When.ALWAYS)
    public String getEchoMessage() {
        return greetingMessage;
    }
}


Now the Kotlin compiler will no longer infer it as a platform type.

[Screenshot: the compiler now infers the return type as kotlin.String]
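A quick Kotlin sketch of the effect – with the annotation in place, no null handling is required:

val message: String = Message().echoMessage  // now inferred as kotlin.String (non-nullable)
val length = message.length                  // no safe call (?.) or !! needed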

We can use one of the following values of When with @Nonnull:

  • When.ALWAYS – the type will always be non-nullable. This is the default option.
  • When.MAYBE / When.NEVER – the type may be nullable.
  • When.UNKNOWN – the type is resolved as a platform type.

E.g.


import javax.annotation.Nonnull;
import javax.annotation.meta.When;

public class Message {

    private final String greetingMessage = "Hello ";

    @Nonnull
    public String getEchoMessage() {
        return greetingMessage;
    }

    @Nonnull(when = When.MAYBE)
    public String getThirdPartyMessage() {
        return fetchFromExternalService(); // helper method omitted here for brevity
    }

    @Nonnull(when = When.UNKNOWN)
    public String getDummyMessage() {
        return "foo";
    }
}


[Screenshot: compiler inference for @Nonnull with different values of When]
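Roughly, this is what the Kotlin compiler now infers for each of the three methods (a sketch; the Java getters are accessed as synthetic properties):

val message = Message()
val echo: String = message.echoMessage           // @Nonnull → kotlin.String (non-nullable)
val third: String? = message.thirdPartyMessage   // When.MAYBE → kotlin.String? (nullable)
val dummy = message.dummyMessage                 // When.UNKNOWN → String! (platform type)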

Refactoring a large codebase – leveraging JSR-305 support

Note: Starting from Kotlin 1.1.50 we can use custom nullability qualifiers (both @TypeQualifierNickname and @TypeQualifierDefault are supported; for more details see this).

Let’s say we have a package in which the majority of classes have methods that:

  • return a non-nullable value.
  • take non-nullable parameters.

Rather than editing each and every source file, we can introduce a package-level nullability default and only override the exceptional cases. Let’s see how to do it.

One can start by creating a custom annotation; let’s call it @NonNullableApi.

Note: When creating a custom annotation such as @NonNullableApi, it is mandatory to annotate it with both @TypeQualifierDefault and a JSR-305 annotation such as @Nonnull, @Nullable, @CheckForNull, etc.

E.g.


import javax.annotation.Nonnull;
import javax.annotation.meta.TypeQualifierDefault;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.PACKAGE)
@Nonnull
@TypeQualifierDefault({ElementType.METHOD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
public @interface NonNullableApi {
}

The following values of ElementType can be used with @TypeQualifierDefault(...):

  • ElementType.FIELD – for fields
  • ElementType.METHOD – for method return types
  • ElementType.PARAMETER – for parameter values

Next, we need to apply our annotation to a particular package by placing it inside package-info.java. E.g.


@NonNullableApi
package api;
import annotation.NonNullableApi;

After that, we can override the default behaviour (if needed) for a particular class, method or parameter. E.g.

public String greet2(@Nonnull(when = When.MAYBE) String name) {
    return String.format("%s%s !!", greetingMessage, name);
}
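From the Kotlin side, the package-level default and the per-parameter override combine roughly like this (a sketch; Greeter is a hypothetical class in the annotated api package that declares greet and the greet2 method above):

val greeter = api.Greeter()
greeter.greet("Bob")    // parameters are non-nullable by default thanks to @NonNullableApi
// greeter.greet(null)  // rejected – reported as a warning or an error depending on the -Xjsr305 mode below
greeter.greet2(null)    // allowed – @Nonnull(when = When.MAYBE) overrides the package default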

Finally, we need to configure the JSR-305 checks by passing the -Xjsr305 compiler flag in build.gradle (or the corresponding file for a different build tool).


compileKotlin {
    kotlinOptions.freeCompilerArgs = ["-Xjsr305=strict"]
}
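If the project uses the Gradle Kotlin DSL instead, the equivalent configuration in build.gradle.kts might look like this (a sketch, assuming the Kotlin JVM plugin is applied):

import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

tasks.withType<KotlinCompile> {
    // pass the JSR-305 flag to every Kotlin compilation task
    kotlinOptions.freeCompilerArgs += "-Xjsr305=strict"
}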


The following compiler flags are supported:

  • -Xjsr305=strict – produce compilation errors (experimental feature).
  • -Xjsr305=warn – produce compilation warnings (default behaviour).
  • -Xjsr305=ignore – do nothing.

Note: In the absence of a build tool, you can also pass these flags directly on the command line.

E.g. -Xjsr305=warn

[Screenshot: IDE view – -Xjsr305=warn results in a compilation warning when supplying a null value for a parameter]

[Screenshot: build output – -Xjsr305=warn results in a successful build (with a warning)]

E.g. -Xjsr305=strict

[Screenshot: IDE view – -Xjsr305=strict results in a compilation error when supplying a null value for a parameter]

[Screenshot: build output – -Xjsr305=strict results in a build failure]

 

[Screenshot: IDE view – the compiler detects a type mismatch]

You can find the complete source for this post here.

Spring Framework 5 + null safety

Spring Framework 5 introduced null safety in its codebase (see this & this). A bunch of annotations such as @NonNullApi, @NonNullFields, etc. were introduced in the org.springframework.lang package. These annotations use the approach described above, i.e. they are meta-annotated with JSR-305 annotations. Kotlin developers can therefore use projects like Reactor, Spring Data MongoDB, Spring Data Cassandra, etc. with null-safety support. Please note that null safety is currently not targeted for:

  • Varargs.
  • Generic type arguments.
  • Array element nullability.

However, there is an ongoing discussion that aims to cover them.

 

@Deprecated in Kotlin

In Kotlin, we can use the @Deprecated annotation to mark a class, function, property, variable or parameter as deprecated. What makes this annotation interesting is not only the ability to deprecate, but also to provide a replacement along with all the necessary imports. This comes in handy for clients upgrading their source code without digging into documentation. Apart from this, we can also control the level of deprecation. Let’s see it in action.

The minimal thing we need to do to mark a function as deprecated is to annotate it and provide a deprecation message. Thanks to Java interoperability, we can use this annotation within an existing Java code base.

@Deprecated(message = "we are going to replace with StringUtils.isEmpty")
public static boolean isEmpty(String input) {
    return input.equals("");
}

[Screenshot: client view when invoking the method]

It would be nice if we could provide the replacement code as an IDE suggestion to help clients upgrade their code base. @ReplaceWith aims to achieve exactly this. It takes two arguments:

  1. expression – the method call to replace with.
  2. imports – the imports necessary to make the expression compile.

E.g.

@Deprecated(message = "we are going to replace with StringUtils.isEmpty",
        replaceWith = @ReplaceWith(
                expression = "StringUtils.isEmpty(input)",
                imports = {"org.apache.commons.lang3.StringUtils"})
)
public static boolean isEmpty(String input) {
    return input.equals("");
}

[Screenshot: the IDE can now suggest the replacement method]
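For comparison, the same deprecated declaration written in Kotlin itself might look like this (a sketch; in Kotlin, ReplaceWith takes the expression followed by a vararg of imports):

@Deprecated(
    message = "we are going to replace with StringUtils.isEmpty",
    replaceWith = ReplaceWith("StringUtils.isEmpty(input)", "org.apache.commons.lang3.StringUtils")
)
fun isEmpty(input: String): Boolean = input == ""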

Depending on the use case, we can also tweak the deprecation level to push for an immediate upgrade. The supported levels are:

  1. WARNING – results in a compilation warning. This is the default level.
  2. ERROR – results in a compilation error.
  3. HIDDEN – the deprecated method is hidden from callers (it appears not to exist at the source level) but is still present at the bytecode level. This is useful when we want to pretend the method no longer exists in the source while keeping it in the bytecode for compatibility reasons.

E.g.

@Deprecated(level = DeprecationLevel.ERROR,
        message = "we are going to replace with StringUtils",
        replaceWith = @ReplaceWith(
                expression = "StringUtils.isEmpty(input)",
                imports = {"org.apache.commons.lang3.StringUtils"})
)
public static boolean isEmpty(String input) {
    return input.equals("");
}

 

[Screenshot: deprecation level – ERROR]

E.g.

@Deprecated(level = DeprecationLevel.HIDDEN,
        message = "we are going to replace with StringUtils",
        replaceWith = @ReplaceWith(
                expression = "StringUtils.isEmpty(input)",
                imports = {"org.apache.commons.lang3.StringUtils"})
)
public static boolean isEmpty(String input) {
    return input.equals("");
}

[Screenshot: deprecation level – HIDDEN]
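To illustrate what HIDDEN means for callers, here is a minimal Kotlin-only sketch – newly compiled Kotlin code can no longer resolve the declaration, while previously compiled callers keep working against the bytecode:

@Deprecated("use StringUtils.isEmpty instead", level = DeprecationLevel.HIDDEN)
fun isEmpty(input: String): Boolean = input == ""

fun main() {
    // isEmpty("")  // uncommenting this fails to compile: the hidden declaration is invisible to new call sites
}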

 

My experience setting up Istio locally

Recently Istio (Greek for ‘sail’) was announced, an open-source platform to connect, manage and secure your microservices. It packs tons of features, such as:

  • Load balancing
  • Metrics collection
  • Logs collection
  • Tracing
  • Request routing
  • Discovery and load balancing
  • Fault injection
  • Rate limiting
  • Auth
  • and much more…

Note: The official documentation tells Mac users to install Minikube by executing:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.19.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

However, if you are using Homebrew with Cask, you can simply run:

brew cask install minikube

I wanted to give it a try, so I headed straight to the Istio docs, and in no time everything was up and running locally. Then I went to deploy BookInfo, a sample app that ships with Istio, and while exporting GATEWAY_URL

export GATEWAY_URL=$(kubectl get po -l istio=ingress -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc istio-ingress -o jsonpath='{spec.ports[0].nodePort}')

I encountered

error: name cannot be provided when a selector is specified
error: error executing jsonpath “{spec.ports[0].nodePort}”: unrecognized identifier spec
export: not valid in this context: template:
zsh: not an identifier: map[string]interface
zsh: not an identifier: map[string]interface

To figure out where the problem was, I quickly broke this single command into a bunch of smaller ones and verified that there was no issue producing the JSON by executing:

kubectl get po -l istio=ingress -o json

Note: To see all the available output options, execute:

kubectl get po -l istio=ingress -o

My small investigation led me to believe that the culprit was jsonpath. After some initial research I came across a GitHub issue; one of the comments, made by Justin Garrison, reads:

I was upgrading to 1.3.0 and was going to test when @2opremio made me realize zsh has a built in to expand (usually numbers) inside { }
Just opened #1651 with quotes to fix it. Tested on bash and zsh. Thanks for the help debugging everyone, sorry for the trouble.

It was clear that the issue was zsh-specific. However, after applying the suggested solution

export GATEWAY_URL=$(kubectl get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o jsonpath='{spec.ports[0].nodePort}')

I was still receiving a bunch of errors:

error: error executing jsonpath “{spec.ports[0].nodePort}”: unrecognized identifier spec
export: not valid in this context: template:
zsh: not an identifier: map[string]interface
zsh: not an identifier: map[string]interface

Digging a bit deeper led me to change the syntax of the original command – note the leading dot in .spec.ports, which was missing before:

export GATEWAY_URL=$(kubectl get po -l istio=ingress -o jsonpath='{.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o jsonpath='{.spec.ports[0].nodePort}')

And finally everything worked. You can verify it by executing:

echo $GATEWAY_URL

Or just head over to the browser

[Screenshot: BookInfo – main page]

[Screenshot: BookInfo – Grafana metrics]

[Screenshot: BookInfo – Zipkin dashboard]

[Screenshot: BookInfo – Zipkin trace]

[Screenshot: BookInfo – generated graph]

Docker on AWS – tweaking kernel params

A few months ago we came up with an idea to scale our Jenkins infrastructure by dynamically launching new containers on the AWS cloud to build and verify pull requests. This would help our developers avoid long waiting times (on average 4–6 hours!) before their pull requests could be merged, since we had a limited number of Jenkins slaves, and every time we wanted to add a new slave, our Ops had to go through the whole setup all over again.

From a developer’s point of view, this meant packing one really big and fat monolithic app (along with all its dependencies, like messaging queues, a database server, configuration management, a backend API, a validation API, etc.) into a Docker container. All those services needed to run locally inside the container; that way we could scale on demand, i.e. launch N such containers depending on the number of pull requests. Hence, no waiting time (as soon as a pull request arrives, Jenkins can start building it).

We first started developing locally on our laptops, and after battling for weeks, everything worked like a charm on the developers’ machines.

Then we tried to launch our Docker container on the AWS cloud. We thought everything would work, since we had already built and tested the containers locally on multiple flavours of Linux running different kernel versions, but the reality was quite the opposite: after launching, the container simply halted for some strange reason.


 

After investigating, we found that the culprits were kernel parameters – to be precise, too-small shmmax and shmall values. You may ask why a Docker container would need larger shmmax and shmall. Remember our big and fat monolithic application? It required a lot of dependencies to be installed inside the container, and a few of those dependencies set larger values for shmmax and shmall in their init scripts.

To run those applications, we were forced to launch Docker in privileged mode, like this:

docker run --privileged -it someDockerImage

After that, we could run those applications without modifying their init scripts.

Note: Although the above solution works, you should avoid launching containers in privileged mode if at all possible, because root inside a Docker container == root on the host.

Upgrade Docker to the latest version on Mac

Docker Toolbox installs a number of tools for us, such as:

  • Docker Client – used to talk to the Docker daemon
  • Docker Machine – the docker-machine binary
  • Docker Engine – the docker binary
  • Docker Compose – the docker-compose binary
  • Docker Kitematic – a Docker GUI
  • VirtualBox – to boot virtual machines

To upgrade all of these to their latest versions, download Docker Toolbox and install it (double-click and follow the installation instructions).

If you are only interested in upgrading the Docker version on a particular virtual machine running the Docker daemon, use docker-machine.

To brush up on the basics: we can’t run Docker natively on a Mac (remember, the Docker daemon needs access to Linux-specific kernel features), and that’s where docker-machine comes into play. We can use docker-machine to create and attach to a Linux virtual machine, which provides the host environment for running Docker.

First, we need to find the name of the machine on which we want to upgrade Docker. We can execute docker-machine ls to list all machines. E.g.

[Screenshot: docker-machine ls output]

In my case, I want to upgrade Docker on the “default” machine. To do that, execute docker-machine upgrade <yourMachineName>

[Screenshot: docker-machine upgrade default]

Note: make sure the machine you want to upgrade is up and running; otherwise you will get an error like this.

[Screenshot: error from docker-machine upgrade when the machine is not running]

That’s it – Docker is upgraded to the latest version. SSH into the Docker machine by executing docker-machine ssh <yourMachineName> and you will see the latest Docker version.

[Screenshot: the upgraded Docker version inside the machine]