Jenkins X — Securing the Cluster

Steve Boardwell · Published in ITNEXT · Jun 22, 2019

Jenkins X is a great tool for quickly creating CI/CD pipelines on Kubernetes. However, the convenience of being able to set things up so quickly also comes with a couple of downsides. In this series of posts, I’ll discuss these and offer my take on how we solved some of them.

Before we start, I’d like to give a quick shout-out to Ilya Shaisultanov, my counterpart across the water. Having colleagues like him makes adventures such as these so much more enjoyable :-).

When creating the initial CI/CD infrastructure for Datameer, our big-data analytics company, we started out with the following goals:

  • VPC: anything private should be put behind a firewall of some kind.
  • VPN: if services are publicly available, we need to restrict who can access them.
  • TLS: because, to be honest, there is no reason why anyone should need to serve anything over plain HTTP today.
  • OAuth: using a static Jenkins, we wanted to allow the developers to sign in using their existing account details rather than creating additional separate users within Jenkins.
  • Scripted: we wanted to keep the manual steps to a bare minimum.

Jenkins X provided some of these out of the box. But in order to have 100% coverage, a few tweaks were needed.

Securing the Cluster — Take #1

First on the list was securing our GKE cluster. Jenkins X provides the option of creating the cluster automatically with:

jx create cluster gke

Unfortunately, GKE’s default settings currently leave the master accessible to the world, meaning we needed to create our own secured cluster in advance. As a side note, according to the release notes GKE will be defaulting to VPC-native clusters in the near future, but for now we needed to create one explicitly.

There are many good resources on the net, like this one, for setting up the cluster manually or using Terraform modules. We used Terraform to create our cluster.

However you do it, the important takeaways here are:

  • you will be able to restrict access to the master to authorised networks only, limiting it to internal networks or to external access through a VPN.
  • you will be able to allocate IP ranges for worker nodes, pods, and services. This can be used to ensure the allocated ranges do not clash with your internal network should you wish to peer the two. Google provides a guide to sizing your cluster. Both options are shown in the sketch below.
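
As a rough illustration, here is what those two options look like with plain gcloud; the cluster name, zone, and CIDRs are placeholders (pick ranges that suit your own network), and a Terraform module would set the same fields.

# private master endpoint restricted to authorised networks,
# VPC-native networking with explicit pod/service ranges
gcloud container clusters create my-jx-cluster \
  --zone europe-west1-b \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24 \
  --enable-ip-alias \
  --cluster-ipv4-cidr 10.4.0.0/14 \
  --services-ipv4-cidr 10.8.0.0/20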

With the cluster now secured and accessible through trusted networks only, it was time to install Jenkins X.

UPDATE — 25th June 2019:
Thanks to Hays Clark for this reminder :-). If creating a private cluster on GKE, you will not be able to pull images from dockerhub, etc, due to your cluster being, well, private. To fix this, simply add a Google Cloud NAT as per this post.
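
In case it helps, a minimal Cloud NAT setup with gcloud looks roughly like this; the router name, network, and region are placeholders:

gcloud compute routers create jx-nat-router \
  --network my-vpc \
  --region europe-west1

gcloud compute routers nats create jx-nat \
  --router jx-nat-router \
  --region europe-west1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges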

Securing the Cluster — Take #2

The folks over at Cloudbees have done a great job making installing Jenkins X as simple as it gets with sensible defaults. However, in order to be able to use *.domain.xyz wildcard certificates later on in the process, we needed to make two decisions:

  • we need to use a real domain (for cert-manager’s DNS challenge)
  • we need to change the urltemplate of the exposed services from
    "{{.Service}}.{{.Namespace}}.{{.Domain}}"
    to
    "{{.Service}}-{{.Namespace}}.{{.Domain}}",
    as the default template would have meant a new sub-domain for every namespace, which a single *.domain.xyz wildcard certificate cannot cover.
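
To make that concrete, here is how the Jenkins service in the jx namespace would be exposed under each template (domain.xyz is just an example):

jenkins.jx.domain.xyz     # default template: not matched by *.domain.xyz
jenkins-jx.domain.xyz     # modified template: matched by *.domain.xyz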

Our install command looked something like:

jx install --provider=gke \
  --git-username=${github_username} \
  --git-api-token=${github_api_token} \
  --default-admin-password=${default_admin_password} \
  --version=${jx_version} \
  --no-default-environments=true \
  --git-private=true \
  --no-tiller=false \
  --domain=${jx_domain} \
  --long-term-storage=false \
  --exposecontroller-urltemplate='"{{.Service}}-{{.Namespace}}.{{.Domain}}"' \
  --urltemplate='"{{.Service}}-{{.Namespace}}.{{.Domain}}"' \
  --buildpack=kubernetes-workloads \
  --batch-mode=true

IMPORTANT: During the installation, the jxing-nginx-ingress-controller service will be created.

You will need to create DNS records (Type A) pointing to:

  • YOUR-DOMAIN > load balancer IP
  • *.YOUR-DOMAIN > load balancer IP

You can find the load balancer IP with:

kubectl get svc jxing-nginx-ingress-controller -n kube-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
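
If your zone happens to live in Google Cloud DNS, the records can be scripted as well; the zone name below is a placeholder and YOUR-DOMAIN follows the convention above:

LB_IP=$(kubectl get svc jxing-nginx-ingress-controller -n kube-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

gcloud dns record-sets transaction start --zone my-zone
gcloud dns record-sets transaction add "$LB_IP" \
  --zone my-zone --name 'YOUR-DOMAIN.' --ttl 300 --type A
gcloud dns record-sets transaction add "$LB_IP" \
  --zone my-zone --name '*.YOUR-DOMAIN.' --ttl 300 --type A
gcloud dns record-sets transaction execute --zone my-zone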

Within just a few minutes we had a shiny new Jenkins X platform up and running with Jenkins, Nexus, and a multitude of other services running in our secured cluster.

But hang on, the services are http only!?!

Fear not: a simple jx upgrade ingress performs all the necessary steps to switch your ingresses over to https on your domain. You can read more about this in this post by Viktor Farcic.

A few minutes later and https was available. Nice!
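
A quick way to confirm is to hit one of the exposed services; the hostname below simply follows the urltemplate chosen earlier:

curl -sI https://jenkins-jx.YOUR-DOMAIN | head -n 1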

But hang on, all my services are open to the world!?!

Ah, of course! Restricting access to the master only prevents people from accessing the cluster through kubectl. Any ingresses created are open by default.

Restricting access to the services

After weighing up our options, the most practical solution was to address the problem at the source, the “jxing-nginx-ingress-controller”.

We made use of the whitelist-source-range setting from nginx, which says:

You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.

To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap.
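
Applied to a single ingress, that would look something like the following (the ingress name, namespace, and CIDRs are just examples). We opted for the global ConfigMap route instead:

kubectl annotate ingress jenkins -n jx \
  'nginx.ingress.kubernetes.io/whitelist-source-range=10.0.0.0/24,203.0.113.0/32'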

Pro-Tip #1: for webhooks to work, the GitHub servers need to be whitelisted as well.
You can find them here.
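
At the time of writing, GitHub also publishes its webhook source ranges through its meta API, so the list can be fetched in scripts too (assuming curl and jq are available):

curl -s https://api.github.com/meta | jq -r '.hooks[]'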

Pro-Tip #2: don’t forget to add the IP range used for your pods (see the VPC section above).
Otherwise you will find that your own pods cannot access service URLs within the same cluster (2 hours of debugging until I found this one 😅).
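
On GKE you can read the pod range straight off the cluster; the cluster name and zone are placeholders:

gcloud container clusters describe my-jx-cluster \
  --zone europe-west1-b \
  --format='value(clusterIpv4Cidr)'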

So, patching the jxing-nginx-ingress-controller ConfigMap can be done with:

kubectl patch configmap/jxing-nginx-ingress-controller \
  --type merge \
  -p '{"data" : {"whitelist-source-range" : "...CIDRS..."}}' \
  -n kube-system

Damn! Didn’t work!

It turns out that, for this to work properly, there was one more thing needed on GKE: in order for the authorised IP ranges to be enforced, we needed to preserve the client source IP.

Luckily, this was just another simple patch.

kubectl patch svc jxing-nginx-ingress-controller \
-p '{"spec":{"externalTrafficPolicy":"Local"}}' \
-n kube-system
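
Setting externalTrafficPolicy to Local keeps incoming traffic on the node that received it, so nginx sees the real client address rather than another node’s internal IP. You can confirm the setting with:

kubectl get svc jxing-nginx-ingress-controller -n kube-system \
  -o jsonpath='{.spec.externalTrafficPolicy}'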

Success!

Without VPN

With both our cluster and Jenkins X services packed securely away behind a list of whitelisted IP ranges, it was time to look at the OAuth integration.

In the next few posts I’ll be going through further adventures including:

  • managing a Jenkins server, configuration, and jobs
  • introducing wildcard certificates for https enabled preview environments
  • managing custom nexus repositories and configuration
  • …possibly other things, such as integrating GitVersion to track our app versions using GitFlow. Let’s see how we get on with that first though :-)

Until the next episode...
