Automatic DNS and SSL on Kubernetes with LetsEncrypt – Part 2

TL;DR Part 2 of how to make your Kubernetes cluster super awesome by adding two pods which automatically handle public DNS registration and SSL certs for any deployment you choose! Reduces deployment complexity and cuts out manual extra tasks.

Part one recap.

In part one we discussed the advantages of the Kubernetes Ingress Controller and configured our cluster to automatically register the public IPs of ingress controllers into AWS Route53 DNS using an annotation.

Go and catch up with Part 1 if you missed it!

TLS / SSL

Now it’s time to add the really fun stuff. We already know which subdomain we want to register with our ingress controller (you did read part one, right?), so we have all the information we need to automatically configure SSL for that domain as well!

There’s something awesome about deploying an app to Kubernetes, browsing to the URL you configured and seeing a happy green BROWSER VALID SSL connection already set up.

Free, Browser-Valid SSL Certificates… as long as you automate!

If you haven’t heard of LetsEncrypt, then this blog post is going to give you an extra bonus present. LetsEncrypt is a browser-trusted certificate authority which charges nothing for its certs and is also fully automated!

This means code can request a domain cert, prove to the certificate authority that the server making the request actually owns the domain (by placing specific content on the webserver for the CA to check) and then receive the valid certificate back, all via the LetsEncrypt API.

If you want to know how this all works, visit the LetsEncrypt “How It Works” page.

Lets Encrypt Automated Proof of Ownership

Without LetsEncrypt, this process would involve manual validation steps, as with most other CAs, and potentially no API for requesting certs at all. We really must thank them for their efforts in making all this possible.

Using Lets Encrypt with Kubernetes Ingress Controllers

Much like with the automatic DNS problem, Google returns more questions than solutions, with bits of different projects and GitHub issues suggesting a number of paths. This blog post aims to distill all of my research and what worked for me.

After testing a few different things, I found that a project called Kube-Lego did exactly what I wanted:

  • Supports configuring both GCE ingress controllers and NGINX ingress controllers with LetsEncrypt certs (I’m using GCE in this example).
  • Supports automatic renewals and the automated proof of ownership needed by LetsEncrypt.

Another reason I liked kube-lego is that it’s standalone. The LetsEncrypt code isn’t embedded in the load balancer (ingress controller) code itself, which would have caused me problems:

  1. I’m using Google’s GCE load balancers, so I have no access to their code anyway.
  2. Even if I was running my own NGINX/Caddy/etc. ingress controller pods, if LetsEncrypt was embedded I’d need to write some clustering logic in order to run more than one instance of them; otherwise they would all race each other to get a cert for the same domain and I’d end up in a mess (and rate limited by the LetsEncrypt API).

KubeLego seemed like the most flexible choice.

Installing KubeLego

Installation is pretty simple, as the documentation at https://github.com/jetstack/kube-lego is much better than that of the dns-controller from Part 1 of this article.

Firstly, we configure a ConfigMap that the kube-lego pod will read its settings from. I’ve saved this as kube-lego-config-map.yaml.
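A minimal sketch of that ConfigMap, assuming we run kube-lego in the kube-system namespace (the e-mail address is a placeholder, and you can point lego.url at the LetsEncrypt staging endpoint while testing to avoid rate limits):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-system
data:
  # E-mail address LetsEncrypt will associate with your certificates (placeholder)
  lego.email: "you@ourdomain.com"
  # LetsEncrypt API endpoint (swap in the staging URL while testing)
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"
```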

Now we need a deployment manifest for the kube-lego app itself. I’ve saved this as kube-lego.yaml.
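A sketch of that deployment, based on the upstream kube-lego example; the image tag and namespace here are assumptions, so check the kube-lego repo for current values:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-lego
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      containers:
      - name: kube-lego
        # Image tag is an assumption - check the kube-lego releases for the current one
        image: jetstack/kube-lego:0.1.5
        ports:
        - containerPort: 8080
        env:
        # E-mail address and API endpoint come from the ConfigMap above
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.email
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.url
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        # Health check against kube-lego's HTTP port
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1
```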

Notice our Deployment references our ConfigMap to pull settings for the e-mail address and API endpoint. Also notice the app exposes port 8080; more on that later!

We can now deploy both the ConfigMap and the app onto our K8s cluster:
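For example:

```bash
kubectl create -f kube-lego-config-map.yaml
kubectl create -f kube-lego.yaml
```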

Voila! We’re running kube-lego on our cluster.

Testing Kube-lego

You can view the logs to see what kube-lego is doing. By default it will listen for all new ingress controllers and take action with certs if they have certain annotations, which we’ll cover below.

Also, if the application fails to start for whatever reason, the health check in the deployment manifest above will fail and Kubernetes will restart the pod.

Your pod name will differ for the logs command:
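Something like the following, assuming the kube-system namespace used above (the pod name shown is a placeholder):

```bash
kubectl get pods -n kube-system | grep kube-lego
kubectl logs -f kube-lego-1234567890-abcde -n kube-system
```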

Putting it all together

Here we are going to create an app deployment for which we want all this magic to happen!

However, there is a reason automatic DNS registration was part one of this blog series. LetsEncrypt validation depends on resolving the requested domain down to our K8s cluster, so if you haven’t enabled automatic DNS (or put the ingress controller’s public IP in DNS yourself), then LetsEncrypt will never be able to validate ownership of the domain and therefore never give you a certificate!

It may be worth revisiting Part 1 of this series if you haven’t already (it’s good, honest!).

App Deployment manifests

If you’re familiar with Kubernetes, you’ll recognise that the following manifests simply deploy an nginx sample ‘application’ in a new namespace. The differences that enable DNS and SSL are all in the ingress controller definition.

namespace.yaml creates our new namespace:
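For example (the namespace name is arbitrary; I’ll use nginx-demo throughout these manifests):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-demo
```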

nginx.yaml deploys our application:
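A minimal sketch of the sample app (image version and replica count are arbitrary):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11
        ports:
        - containerPort: 80
```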

service.yaml is needed to track active backend endpoints for the Ingress Controller (notice it’s of type NodePort; it’s not publicly exposed):
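A sketch, matching the deployment above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx-demo
spec:
  # NodePort: reachable by the GCE ingress load balancer, but no public IP of its own
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```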

Finally, our Ingress Controller, ingress-tls.yaml. I’ve highlighted the ‘non-standard’ bits which enable our automated magic.
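A sketch of what that looks like. The host is the example from Part 1; the dns.alpha.kubernetes.io/external annotation is the DNS trigger from Part 1, the kubernetes.io/tls-acme annotation is what tells kube-lego to manage a cert for this ingress, and the TLS secret name is a placeholder:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: nginx-demo
  annotations:
    # Part 1: dns-controller registers the host below in Route53
    dns.alpha.kubernetes.io/external: "true"
    # Part 2: kube-lego requests and renews a LetsEncrypt cert for the host below
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - app.ourdomain.com
    # kube-lego stores the issued certificate in this secret (name is a placeholder)
    secretName: app-ourdomain-tls
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
```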

Let’s deploy these manifests to our Kubernetes cluster and watch the magic happen!
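For example:

```bash
kubectl create -f namespace.yaml
kubectl create -f nginx.yaml
kubectl create -f service.yaml
kubectl create -f ingress-tls.yaml
```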

Right, now our ingress controller is going to configure a GCE load balancer for us (standard). This will be allocated a public IP, and our dns-controller will register this against our hostname in Route53:
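You can watch the public IP appear on the ingress (namespace as assumed above):

```bash
kubectl get ingress -n nginx-demo -w
```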

And looking in our AWS Route53 portal:

Route53 showing Updated DNS Records

Excellent!

While this was happening, kube-lego was also configuring the GCE load balancer to support LetsEncrypt’s ownership checks. Looking at the load balancer configuration in Google’s cloud console, we can see that a specific URL path (the ACME challenge path, /.well-known/acme-challenge/*) has been configured to point to the kube-lego app on port 8080.

This allows kube-lego to control the validation requests for domain ownership that will come in from LetsEncrypt when we request a certificate. All other request paths will be passed to our actual app.

Kube-lego adds configuration to the ingress loadbalancer to pass LetsEncrypt ownership challenges

This will allow the kube-lego process (requesting certs via LetsEncrypt) to succeed:

As soon as a valid cert is received, kube-lego re-configures the GCE load balancer for HTTPS as well as HTTP (notice in the above screenshot that only the HTTP protocol is enabled on the LB when it is first created).

Kube-lego configures SSL certificate into GCE’s ingress load balancer

The Result

The whole process above takes a couple of minutes to complete (LB getting a public IP, DNS registration, LetsEncrypt checks, getting the cert, configuring the LB with SSL), but then… huzzah! Completely hands-off, publicly available services, protected by valid SSL certs!

Now your developers can deploy applications which are SSL by default without any extra hassle.

I appreciate any corrections, comments or feedback; please direct them to @mattdashj on Twitter.

Until next time!

Matt

Automatic DNS and SSL on Kubernetes with LetsEncrypt – Part 1

TL;DR How to make your Kubernetes cluster super awesome by adding two pods which automatically handle public DNS registration and SSL certs for any deployment you choose! Reduces deployment complexity and cuts out manual extra tasks.

Overview

Kubernetes Ingress controllers provide developers with an API for creating HTTP/HTTPS (L7) proxies in front of your applications, something that historically we’ve done ourselves: either inside our application pods alongside our apps or, more likely, as a separate set of pods in front of our application, strung together with Kubernetes Services (L4).

Without Ingress Controller

Public kubernetes service flow without ingress controllers

With Ingress Controller

Ingress controller simplifying k8s services

Technically, there is still a Service in the background to track membership, but it’s not in the “path of traffic” as it is in the first diagram.

What’s more, Ingress controllers are pluggable: a single Kubernetes API for developers, but any L7 load balancer behind it in reality, be it Nginx, GCE, Traefik, or hardware… Excellent.

However, there are some things Ingress controllers *DON’T* do for us, and that is what I want to tackle today…

  1. Registering our ingress loadbalancer in public DNS with a useful domain name.
  2. Automatically getting SSL/TLS certificates for our domain and configuring them on our Ingress load balancer.

With these two additions, developers can deploy their application to K8s and automatically have it accessible and TLS secured. Perfect!

DNS First

DNS is fairly simple, yet a Google search for this topic makes it sound anything but: lots of different suggestions, GitHub issues, and half-started projects.

All we want is something to listen for new Ingress Controllers, find the public IP given to the new ingress load balancer, and update DNS with the app’s DNS name and load balancer IP.

After some research, it turns out code exists to do exactly what we want: it’s called ‘dns-controller’ and it’s now part of the ‘kops’ codebase from the cluster-ops SIG. It currently updates AWS Route53, but that’s fine, as that’s what I’m using anyway.

https://github.com/kubernetes/kops/tree/master/dns-controller

However, the documentation is slim and, unless you’re using kops, it’s not packaged in a useful way. Thankfully, someone has already extracted the dns-controller pieces and packaged them in a Docker container for us.

The security guy in me points out: if you’re looking at anything more than testing, I’d strongly recommend packaging the dns-controller code yourself so you know 100% what’s in it.

DNS – Here’s how to deploy (1/2)

Create the following deployment.yaml manifest:
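A sketch of roughly what this looks like. The image name is a placeholder for the pre-packaged dns-controller container mentioned below (or your own build), the flags are based on the kops dns-controller documentation, and the route53-credentials secret it mounts is created in a later step:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dns-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: dns-controller
    spec:
      containers:
      - name: dns-controller
        # Placeholder image - use the pre-packaged dns-controller container or build your own
        image: yourrepo/dns-controller:latest
        args:
        # Watch Ingress resources (not just Services) for DNS annotations
        - --watch-ingress=true
        # Update records in AWS Route53
        - --dns=aws-route53
        # Allow it to manage any matching hosted zone; restrict this in production
        - --zone=*/*
        volumeMounts:
        # AWS credentials file, provided by the secret created below
        - name: aws-credentials
          mountPath: /root/.aws/
          readOnly: true
      volumes:
      - name: aws-credentials
        secret:
          secretName: route53-credentials
```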

This pulls down our pre-packaged dns-controller code and runs it on our cluster. By default I’ve placed this in the kube-system namespace.

The code needs to change AWS Route53 DNS entries *duh*, so it also needs AWS Credentials.

(I recommend using AWS IAM to create a user with access ONLY to the Route53 zone you need this app to control. Don’t give it your developer keys; anyone in your K8s cluster could potentially read them.)

When we’ve got our credentials, create a secret containing your AWS credentials file as follows:
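Something like this, assuming the deployment above mounts a secret named route53-credentials (adjust the file path to wherever your credentials live):

```bash
kubectl create secret generic route53-credentials \
  --from-file=credentials=$HOME/.aws/credentials \
  --namespace=kube-system
```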

The path to your AWS credentials file will differ. If you don’t have a credentials file, it’s a simple format, as shown below.
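That is, the standard AWS credentials file layout, with placeholder values:

```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```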

Now deploy your dns-controller into K8s with kubectl create -f deployment.yaml

You can query the application’s logs to see it working; by default it will try to update any DNS domain it finds configured on an ingress controller that has a matching zone in Route53.

Example log output:
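To fetch the logs yourself (namespace as deployed above; the pod name is a placeholder):

```bash
kubectl get pods -n kube-system | grep dns-controller
kubectl logs -f dns-controller-1234567890-abcde -n kube-system
```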

You will see errors here if it cannot find your AWS credentials, so check that your secret exists and that the credentials are valid!

Using our new automated DNS service.

Right! How do we use it? This is an ingress controller without automatic DNS:
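For illustration, a bare-bones ingress; the resource and service names are placeholders, and the host is the example used below:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: 80
```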

And this is one WITH our new automatic DNS registration:
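The same ingress, with one extra annotation:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Tells dns-controller to register the host below in Route53
    dns.alpha.kubernetes.io/external: "true"
spec:
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: 80
```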

Simply add the annotation  dns.alpha.kubernetes.io/external: "true" to any ingress controller and our new dns-controller will try to add the domain listed under  - host: app.ourdomain.com  to DNS with the public IP of the Ingress controller.

Try it out! My cluster is on GCE (a GKE cluster), so we’re using the Google load balancers. I’m noticing they take around ~60 seconds to get a public IP assigned, so DNS can take ~90-120 seconds to be populated. That said, I don’t need to re-deploy my ingress controllers with my software deployments, so this is acceptable for me.

In the next section, we’ll configure automatic SSL certificate generation and configuration for our GCE load balancers!

Go read Part 2 now!

Comments or suggestions? Please find me on Twitter: @mattdashj