Jenkins and Kubernetes are a great combination. I love the concept of leveraging container technology to provide any build tools you might require. However, in many cases the examples provided are not the best or most secure.

In section 10.2.2, "Building your image", of Manning's Microservices in Action, the following Jenkins Pipeline script is suggested:

def withPod(body) {
  podTemplate(label: 'pod', serviceAccount: 'jenkins', containers: [
      containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
      containerTemplate(name: 'kubectl', image: 'morganjbruce/kubectl', command: 'cat', ttyEnabled: true)
    ],
    volumes: [
      hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    ]
  ) { body() }
}

withPod {
  node('pod') {
    def tag = "${env.BRANCH_NAME}.${env.BUILD_NUMBER}"
    def service = "market-data:${tag}"
    checkout scm
    container('docker') {
      stage('Build') {
        sh("docker build -t ${service} .")
      }
    }
    container('kubectl') {
      stage('Deploy') {
        sh("kubectl --namespace=staging apply -f deploy/staging/")
      }
    }
  }
}

Three things in this script bother me:

  1. Mounting the docker socket
  2. Depending on docker
  3. The kubectl image

In this post I’ll explain the problem with each of these three points and provide a solution that addresses them.

Mounting the docker socket is bad

Docker runs as root, and the owner of /var/run/docker.sock is also root. This means that anyone with access to that socket can effectively get root access on the host.

An exploit can be pretty simple, as seen on Stack Overflow. Thanks, cyphar!

The danger of using this with Jenkins is also explicitly mentioned by Peter Benjamin on dev.to.
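To illustrate the general idea (this is a sketch of the well-known pattern, not the exact snippet from those posts): anyone who can talk to the socket can start a container that bind-mounts the host's root filesystem and chroots into it.

```shell
# Anyone who can reach /var/run/docker.sock (for example, from inside
# the 'docker' build container above) can start a container that mounts
# the host's root filesystem and chroot into it, yielding a root shell
# on the node:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```

No privilege escalation bug is needed; this is simply what access to the Docker daemon means.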

Kubernetes without Docker

With modern Kubernetes configurations it may very well be the case that Docker is not installed on any of the nodes. The reason is simple: Kubernetes only needs a Container Runtime Interface (CRI) implementation. Read more about this on kubernetes.io.

One of the most widely used implementations is containerd. In fact, containerd was originally part of Docker!
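If you are unsure which runtime your own cluster uses, kubectl can tell you (the node names and versions below will of course differ per cluster):

```shell
# The CONTAINER-RUNTIME column shows what each node actually runs:
kubectl get nodes -o wide

# Or print just the runtime per node:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```

If the output says something like containerd://1.3.4 rather than docker://..., there is no Docker socket to mount in the first place.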

Mounting the docker socket on a node without Docker installed will just result in errors like:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Pick your docker images

Pick your dependencies wisely! This also goes for Docker images. I always check for two things:

  1. Is the source code available?
  2. Is the image actively maintained?

morganjbruce/kubectl fails both checks.

It appears that one of the book's authors made this image just to accompany the book. There is no Dockerfile, and it does not seem to be actively maintained. The book does a good job of explaining the concept, but this is not a sustainable solution.

A better solution

Let’s start with the Docker dependency and the security issue. There are several tools for building Docker images. The following requirements make that list a bit shorter:

  • Run as container on kubernetes
  • Run unprivileged

I think Kaniko is the most promising. I have not tried img, but I will definitely do so another time.

My solution for the kubectl image is to take one from the list below. These have their sources available and seem to be updated regularly.

  • https://hub.docker.com/r/bitnami/kubectl
  • https://hub.docker.com/r/lachlanevenson/k8s-kubectl
  • https://hub.docker.com/r/nwwz/kubectl (mine)
  • Build your own
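If you go the build-your-own route, such an image can be tiny. A minimal sketch (the base image and kubectl version here are just examples; pin a version that matches your cluster):

```dockerfile
FROM alpine:3.12

# Example version; pin one compatible with your cluster.
ARG KUBECTL_VERSION=v1.18.0

RUN wget -q -O /usr/local/bin/kubectl \
      https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl \
 && chmod +x /usr/local/bin/kubectl

ENTRYPOINT ["kubectl"]
```

This way you pass both checks: the source is available (it's yours) and you control how often it is updated.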

Combining Kaniko and another kubectl image into a better solution:

def withPod(body) {
  podTemplate(serviceAccount: 'jenkins', containers: [
      containerTemplate(name: 'kaniko', image: 'gcr.io/kaniko-project/executor:debug', command: '/busybox/cat', ttyEnabled: true),
      containerTemplate(name: 'kubectl', image: 'nwwz/kubectl:debug-v1.18', command: 'cat', ttyEnabled: true)
    ],
    volumes: [
      secretVolume(secretName: 'registrycred', mountPath: '/cred')
    ]
  ) { body() }
}

withPod {
  node(POD_LABEL) {
    def tag = "${env.BRANCH_NAME}.${env.BUILD_NUMBER}"
    // Kaniko pushes, so the destination must include your registry host
    def service = "<your-registry-server>/market-data:${tag}"
    checkout scm
    container('kaniko') {
      stage('Build image') {
        sh("cp /cred/.dockerconfigjson /kaniko/.docker/config.json")
        sh("executor --context=`pwd` --dockerfile=`pwd`/Dockerfile --destination=${service} --single-snapshot")
      }
    }
    container('kubectl') {
      stage('Deploy') {
        sh("kubectl --namespace=staging apply -f deploy/staging/")
      }
    }
  }
}

Please note that this script not only builds the image but also pushes it to a registry. Therefore, add your registry credentials to the Kubernetes cluster with this command:

kubectl create secret docker-registry registrycred -n build-env --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-password> --docker-email=<your-email>

registrycred must match the secret name used in your Jenkinsfile, and the namespace must match the namespace that Jenkins uses.
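The pipeline copies /cred/.dockerconfigjson, so the secret must be of type kubernetes.io/dockerconfigjson, which is exactly what kubectl create secret docker-registry produces. You can verify this before running the pipeline:

```shell
# The pipeline mounts this secret and copies its .dockerconfigjson key,
# so the type must be kubernetes.io/dockerconfigjson:
kubectl get secret registrycred -n build-env -o jsonpath='{.type}'
```

If the type is anything else (for example, Opaque), the cp step in the Kaniko container will not find the .dockerconfigjson key and the push will fail with an authentication error.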

Conclusion

Examples are often great, but they serve a purpose. I guess the purpose of the example in Manning's Microservices in Action is more conceptual than production-worthy. In any case, examples are just examples and should be used as such. The proposed solution is no exception: it's merely an example of what could be done better.

But always be cautious about potential security issues, and always check the Docker images that you depend on.