Measuring internet consistency with speedtest.net, plotly and docker

While measuring latency of your internet link over time is as simple as something like this:
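
(The original snippet isn’t preserved here; a minimal stand-in, assuming all you want is a timestamped ping log against a well-known address, might look like this.)

# Log one timestamped ping result per minute; 8.8.8.8 is just an example target
while true; do
  echo "$(date -u +%FT%TZ) $(ping -c 1 8.8.8.8 | grep 'time=')"
  sleep 60
done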

I couldn’t find anything which would give a good measure of bandwidth consistency over time.

Now this is understandable: you need to use bandwidth to properly measure bandwidth (especially if you don’t run the infrastructure), so it’s a less common tool in the regular network engineer’s arsenal, with tools such as Speedtest.NET designed for a one-off, point-in-time check of internet speed.

But with considerable oversubscription common amongst home ISPs, I was looking for a way to graph trends in my internet connectivity at an interval which wouldn’t cause network issues, but would give me much more insight than the occasional, manual test.

This is when I stumbled upon https://github.com/sivel/speedtest-cli, an excellent Python implementation of the Speedtest.NET web-browser client. Suddenly, we have a decent command line interface to Speedtest.NET!

There are even options to output the results to CSV or JSON, ideal for most!

But what if we want this to be truly hands-off monitoring? Enter Plot.ly and Docker!

Plotly gives us an API and Python bindings to easily submit and update data on a Plotly-hosted graph… for free (as long as the graphs are public, which is cool).

So I added some basic plotly support.
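
I won’t reproduce the exact patch here, but the approach looks roughly like the sketch below, using the Plotly cloud Python API of the time; the username, API key, filename and download value are all placeholders.

# Rough illustration of the approach (not the exact speedtest-cli patch).
import datetime

import plotly.plotly as py                 # legacy Plotly cloud API
from plotly.graph_objs import Scatter

py.sign_in("your-username", "your-api-key")   # placeholder credentials

download_mbps = 42.1                       # would come from the speedtest result
point = Scatter(x=[datetime.datetime.utcnow()], y=[download_mbps], name="download")

# fileopt="extend" appends the new point to the named graph rather than replacing it
py.plot([point], filename="speedtest-results", fileopt="extend", auto_open=False)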

Running it gives us a graph in Plotly like this…

Run it again; you get a second data point.

Enter Docker… and we can package this up into a one-hit measurement command which will run anywhere, adding each new data point (with timestamp) to the graph.

You can run this yourself with one command:
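
(The original command isn’t preserved here; a reconstruction looks something like the following, where the image name is a placeholder for whatever you’ve built or pulled.)

docker run --rm \
  -v ~/.plotly:/root/.plotly \
  your-repo/speedtest-plotly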

All you’ll need is a Plot.ly API key (which we pass into the container with the -v directory mount).

I’ve added more Plotly info to the readme in the PR to speedtest-cli for those that want to know more:

https://github.com/matjohn2/speedtest-cli/blob/dbb1e9ae1e295920141377be4649a6505b897b01/README.rst

And here’s my live graph (click for live).

Automatic DNS and SSL on Kubernetes with LetsEncrypt – Part 2

TL;DR Part 2 of how to make your Kubernetes cluster super awesome by adding two pods which automatically handle public DNS registration and SSL certs for any deployment you choose! Reduces the complexity of deployments and reduces manual extra tasks.

Part one recap.

In part one we discussed the advantages of the Kubernetes Ingress Controller and configured our cluster to automatically register the public IPs of ingress controllers into AWS Route53 DNS using an annotation.

Go and catch up HERE if you missed it!

TLS / SSL

Now it’s time to add the really fun stuff. We already know which subdomain we want to register with our ingress controller (you did read part one, right?), so we have all the information we need to automatically configure SSL for that domain as well!

There’s something awesome about deploying an app to Kubernetes, browsing to the URL you configured and seeing a happy green BROWSER VALID SSL connection already set up.

Free, browser-valid SSL certificates… as long as you automate!

If you haven’t heard of LetsEncrypt, then this blog post is going to give you an extra bonus present. LetsEncrypt is a browser-trusted certificate authority which charges nothing for its certs and is also fully automated!

This means code can request a cert for a domain, prove to the certificate authority that the server making the request actually owns the domain (by placing specific content on the webserver for the CA to check) and then receive the valid certificate back, all via the LetsEncrypt API.

If you want to know how this all works, visit the LetsEncrypt “How It Works” page.

Lets Encrypt Automated Proof of Ownership

Without LetsEncrypt, this process would have manual validation steps, as with most other CAs, and potentially no API for requesting certs at all. We really must thank their efforts for making all this possible.

Using Lets Encrypt with Kubernetes Ingress Controllers

Much like with the automatic DNS problem, a Google search returns more questions than solutions, with different bits of projects and GitHub issues suggesting a number of paths. This blog post aims to distill all of my research and what worked for me.

After testing a few different things, I found that a project called Kube-Lego did exactly what I wanted:

  • Supports configuring both GCE Ingress Controllers and NginX ingress controllers with LetsEncrypt Certs (I’m using GCE in this example).
  • Supports automatic renewals and the automated proof of ownership needed by LetsEncrypt.

Another reason I liked kube-lego is that it’s standalone. The LetsEncrypt code isn’t embedded in the load balancer (ingress controller) code itself, which would have caused me problems:

  1. I’m using Google’s GCE load balancers, so I have no access to their code anyway.
  2. Even if I was running my own NGINX/Caddy/etc. ingress controller pods, if LetsEncrypt was embedded I’d need to write some clustering logic in order to have more than one instance of them running; otherwise they would all race each other to get a cert for the same domain and I’d end up in a mess (and rate limited by the LetsEncrypt API).

KubeLego seemed like the most flexible choice.

Installing KubeLego

Installation is pretty simple, as the documentation at https://github.com/jetstack/kube-lego was much better than the dns-controller from Part 1 of this article.

Firstly, we configure a ConfigMap that the kube-lego pod will get its settings from; I’ve saved this as kube-lego-config-map.yaml:
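
(A sketch along the lines of the upstream kube-lego example; the namespace, e-mail address and LetsEncrypt URL are placeholders you should change.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  # E-mail address LetsEncrypt will associate with your certificates
  lego.email: "you@example.com"
  # LetsEncrypt API endpoint (use their staging URL while testing to avoid rate limits)
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"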

Now we need a deployment manifest for the kube-lego app itself; I’ve saved this as kube-lego.yaml:
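
(Again, a sketch based on the upstream kube-lego example rather than my exact manifest; the image tag, namespace and probe settings are assumptions.)

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-lego
  namespace: kube-lego
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.3
        ports:
        - containerPort: 8080
        env:
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.email
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.url
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        # Health checks against kube-lego's own endpoint on port 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
          timeoutSeconds: 1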

Notice our Deployment references our ConfigMap to pull settings for e-mail and API endpoint. Also notice the app exposes port 8080, more on that later!

We can now deploy both the ConfigMap and the app onto our K8s cluster:
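
(Assuming the kube-lego namespace from the manifests above doesn’t exist yet.)

kubectl create namespace kube-lego
kubectl create -f kube-lego-config-map.yaml
kubectl create -f kube-lego.yaml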

Voila! We’re running kube-lego on our cluster.

Testing Kube-lego

You can view the logs to see what kube-lego is doing, by default it will listen for all new ingress-controllers and take action with certs if they have certain annotations which we’ll cover below.

Also, if the application fails to start for whatever reason, the health check in the deployment manifest above will fail and Kubernetes will restart the pod.

Your pod name will differ for the logs command:
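
Something like the following will do it:

kubectl get pods --namespace kube-lego
kubectl logs --namespace kube-lego <your-kube-lego-pod-name>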

Putting it all together

Here we are going to create an app deployment for which we want all this magic to happen!

However, there is a reason automatic DNS registration was part one of this blog series. LetsEncrypt validation depends on resolving the requested domain down to our K8s cluster, so if you haven’t enabled automatic DNS (or put the ingress controller’s public IP in DNS yourself), then LetsEncrypt will never be able to validate ownership of the domain and therefore never give you a certificate!

It may be worth revisiting part 1 of this series if you haven’t already (it’s good, honest!).

App Deployment manifests

If you’re familiar with Kubernetes, then you’ll recognise that the following manifests simply deploy an NGINX sample ‘application’ in a new namespace. The differences which enable DNS and SSL are all in the ingress controller definition.

namespace.yaml creates our new namespace:
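
(The namespace name here is a placeholder; use whatever suits your app.)

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-demo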

nginx.yaml deploys our application:
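
(A minimal sketch of a sample NGINX deployment; the image tag and replica count are arbitrary.)

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11
        ports:
        - containerPort: 80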

service.yaml is needed to track active backend endpoints for the ingress controller (notice it’s of type NodePort; it’s not publicly exposed):
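
(Sketch only; the names match the deployment above.)

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx-demo
spec:
  type: NodePort        # reachable by the GCE ingress, not directly exposed to the internet
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80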

Finally, our ingress controller, ingress-tls.yaml. The ‘non-standard’ bits which enable our automated magic are the two annotations and the tls section:
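
(A sketch using the example hostname from part one; the secret name is a placeholder that kube-lego will create and populate with the issued certificate.)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  namespace: nginx-demo
  annotations:
    # Part one: tells our dns-controller to register the host below in Route53
    dns.alpha.kubernetes.io/external: "true"
    # Part two: tells kube-lego to request and manage a LetsEncrypt cert for the host below
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - app.ourdomain.com
    secretName: app-ourdomain-com-tls
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: nginx
          servicePort: 80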

Let’s deploy these manifests to our Kubernetes cluster and watch the magic happen!
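
(In the order below, so the namespace exists before everything else.)

kubectl create -f namespace.yaml
kubectl create -f nginx.yaml
kubectl create -f service.yaml
kubectl create -f ingress-tls.yaml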

Right, now our ingress controller is going to configure a GCE load balancer for us (standard behaviour). This will be allocated a public IP, and our dns-controller will register that IP against our hostname in Route53:
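
You can watch this happen from kubectl; the ADDRESS column will eventually show the load balancer’s public IP:

kubectl get ingress --namespace nginx-demo -o wide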

And looking in our AWS Route53 portal:

Route53 showing Updated DNS Records

Excellent!

While this was happening, kube-lego was also configuring the GCE load balancer to support LetsEncrypt’s ownership checks. Looking at the load balancer configuration in Google’s cloud console, we can see that a specific URL path (the ACME challenge path, /.well-known/acme-challenge/*) has been configured to point to the kube-lego app on port 8080.

This allows kube-lego to control the validation requests for domain ownership that will come in from LetsEncrypt when we request a certificate. All other request paths will be passed to our actual app.

Kube-lego adds configuration to the ingress loadbalancer to pass LetsEncrypt ownership challenges

This will allow the kube-lego process (requesting certs via LetsEncrypt) to succeed:

As soon as a valid cert is received, kube-lego re-configures the GCE LoadBalancer for HTTPS as well as HTTP (notice in the above screenshot, only Protocol HTTP is enabled on the LB when it is first created).

Kube-lego configures SSL certificate into GCE’s ingress load balancer

The Result

The whole process above takes a couple of minutes to complete (LB getting a public IP, DNS registration, LetsEncrypt checks, getting the cert, configuring the LB with SSL) but then… Huzzah! Completely hands-off, publicly available services, protected by valid SSL certs!

Now your developers can deploy applications which are SSL by default without any extra hassle.

I’d appreciate any corrections, comments or feedback; please direct them to @mattdashj on Twitter.

Until next time!

Matt

Automatic DNS and SSL on Kubernetes with LetsEncrypt – Part 1

TL;DR How to make your Kubernetes cluster super awesome by adding two pods which automatically handle public DNS registration and SSL certs for any deployment you choose! Reduces the complexity of deployments and reduces manual extra tasks.

Overview

Kubernetes Ingress controllers provide developers with an API for creating HTTP/HTTPS (L7) proxies in front of applications, something that historically we’ve done ourselves: either inside our application pods with our apps, or more likely, as a separate set of pods in front of our application, strung together with Kubernetes Services (L4).

Without Ingress Controller

Public kubernetes service flow without ingress controllers

With Ingress Controller

Ingress controller simplifying k8s services

Technically, there is still a Service in the background to track membership, but it’s not in the “path of traffic” as it is in the first diagram.

What’s more, Ingress controllers are pluggable: a single Kubernetes API to developers, but any L7 load balancer in reality, be it NGINX, GCE, Traefik, or hardware… Excellent.

However, there are some things Ingress controllers *DON’T* do for us, and that is what I want to tackle today…

  1. Registering our ingress loadbalancer in public DNS with a useful domain name.
  2. Automatically getting SSL/TLS certificates for our domain and configuring them on our Ingress load balancer.

With these two additions developers can deploy their application to K8s and automatically have it accessible and TLS-secured. Perfect!

DNS First

DNS is fairly simple, yet a Google search for this topic makes it sound anything but: lots of different suggestions, GitHub issues and half-started projects.

All we want is something that listens for new ingress controllers, finds the public IP given to the new ingress load balancer, and updates DNS with the app’s DNS name and that load balancer IP.

After some research, it turns out code exists to do exactly what we want: it’s called ‘dns-controller’ and it’s now part of the ‘kops’ codebase from the cluster-ops SIG. It currently updates AWS Route53, but that’s fine, as it’s what I’m using anyway.

https://github.com/kubernetes/kops/tree/master/dns-controller

However, the documentation is slim and unless you’re using KOPS, it’s not packaged in a useful way. Thankfully, someone has already extracted the dns-controller pieces and packaged them in a docker container for us.

The security guy in me points out: if you’re looking at anything more than testing, I’d strongly recommend packaging the dns-controller code yourself so you know 100% what’s in it.

DNS – Here’s how to deploy (1/2)

Create the following deployment.yaml manifest:
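
(The original manifest isn’t reproduced here; the sketch below shows the general shape. The image name is a placeholder for the pre-packaged container mentioned above, the flags are the ones documented for the kops dns-controller, and the secret mount matches the AWS credentials secret we create further down.)

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dns-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: dns-controller
    spec:
      containers:
      - name: dns-controller
        image: someone/dns-controller:latest   # placeholder; ideally build and host this yourself
        args:
        - --watch-ingress=true     # watch Ingress resources, not just Services
        - --dns=aws-route53        # use the Route53 backend
        - --zone=*/*               # allow any zone the credentials can see
        volumeMounts:
        - name: aws-credentials
          mountPath: /root/.aws    # where the AWS SDK looks for a credentials file
          readOnly: true
      volumes:
      - name: aws-credentials
        secret:
          secretName: aws-credentials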

This pulls down our pre-packaged dns-controller code and runs it on our cluster. By default I’ve placed this in the kube-system namespace.

The code needs to change AWS Route53 DNS entries *duh*, so it also needs AWS Credentials.

(I recommend using AWS IAM to create a user with ONLY the access to the Route53 zone you need this app to control. Don’t give it your developer keys; anyone in your K8s cluster could potentially read them.)

When we’ve got our credentials, create a secret with your AWS credentials file in it as follows:
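
(The secret name here matches the one referenced in the deployment sketch above; adjust both if you prefer a different name.)

kubectl create secret generic aws-credentials \
  --namespace kube-system \
  --from-file=credentials=/home/you/.aws/credentials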

The path to your AWS credentials file will differ. If you don’t have a credentials file, it’s a simple format, as shown below.
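
(Standard AWS shared-credentials format; the values are obviously placeholders.)

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx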

Now deploy your dns-controller into K8s with kubectl create -f deployment.yaml

You can query the application’s logs to see it working; by default it will try to update any DNS domain it finds configured on an Ingress controller with a matching zone in Route53.

Example log output:

You will see errors here if it cannot find your AWS credentials (check your secret) or if the credentials are not valid!

Using our new automated DNS service.

Right! How do we use it? This is an ingress controller without automatic DNS:
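
(A sketch of a plain GCE ingress; the names and hostname are placeholders.)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-app
          servicePort: 80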

And this is one WITH our new automatic DNS registration:
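
(Identical, apart from the annotation.)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    dns.alpha.kubernetes.io/external: "true"   # the only difference
spec:
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-app
          servicePort: 80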

Simply add the annotation  dns.alpha.kubernetes.io/external: "true" to any ingress controller and our new dns-controller will try to add the domain listed under  - host: app.ourdomain.com  to DNS with the public IP of the Ingress controller.

Try it out! My cluster is on GCE (a GKE cluster), so we’re using the Google load balancers. I’m noticing they take around ~60 seconds to get a public IP assigned, so DNS can take ~90–120 seconds to be populated. That said, I don’t need to re-deploy my ingress controllers with my software deployments, so this is acceptable for me.

In the next section, we’ll configure automatic SSL certificate generation and configuration for our GCE load balancers!

Go read part two now, CLICK HERE.

Comments or suggestions? Please find me on twitter @mattdashj

OpenStack infrastructure automation with Terraform – Part 2

TL;DR: Second of a two-post series looking at automation of an OpenStack project with Terraform, using the new Terraform OpenStack provider.

With the OpenStack provider for Terraform being close to acceptance into the Terraform release, it’s time to unleash its power on the Cisco OpenStack-based cloud.

In this post, we will:

  • Write a Terraform ‘.tf’ file to describe our desired deployment state, including:
    • Neutron networks/subnets
    • Neutron gateways
    • Keypairs and Security Groups
    • Virtual machines and Volumes
    • Virtual IPs
    • Load balancers (LBaaS).
  • Have terraform deploy, modify and rip down our infrastructure.

If you don’t have the terraform openstack beta provider available, you’ll want to read Part 1 of this series.

Terraform Intro

Terraform “provides a common configuration to launch infrastructure”. From IaaS instances and virtual networks to DNS entries and e-mail configuration.

The idea being that a single Terraform deployment file can leverage multiple providers to describe your entire application infrastructure in one deployment tool; even if your DNS, LB and Compute resources come from three different providers.

Support for different infrastructure types comes from provider modules; it’s the OpenStack provider we’re focused on testing here.

If you’re not sure why you want to use Terraform, you’re probably best getting off here and having a look around Terraform.io first!

Terraform Configuration Files

Terraform configuration files describe your desired infrastructure state, built up of multiple resources, using one or more providers.

Configuration files are a custom but easy-to-read format with a .tf extension. (They can also be written in JSON for machine-generated content.)

Generally, a configuration file will hold necessary parameters for any providers needed, followed by a number of resources from those providers.

Below is a simple example with one provider (OpenStack) and one resource (an SSH public key to be uploaded to our OpenStack tenant).
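
(A minimal sketch; the credential placeholders mirror those mentioned below, and the public key is truncated.)

provider "openstack" {
  user_name   = "<<USERNAME>>"
  tenant_name = "<<TENANT_NAME>>"
  password    = "<<PASSWORD>>"
  auth_url    = "https://<<OPENSTACK_API>>:5000/v2.0"
}

resource "openstack_compute_keypair_v2" "tf_keypair" {
  name       = "terraform-demo-keypair"
  region     = ""                          # see the note on regions later in this post
  public_key = "ssh-rsa AAAAB3NzaC1yc2E... user@host"
}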

Save the above as demo1.tf and replace the placeholders with your own OpenStack environment login details.

Now run $terraform plan in the same directory as your demo1.tf file. Terraform will tell you what it’s going to do (add/remove/update resources), based on checking the current state of the infrastructure:

Terraform checks and sees that the keypair doesn’t already exist on our OpenStack provider, so a new resource is going to be created if we apply our infrastructure… good!

Terraform Apply!

Success! At this point you can check Openstack to confirm our new keypair exists in the IaaS:
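
For example, from the OpenStack CLI (the nova client was the common tool at the time):

nova keypair-list          # or: openstack keypair list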


Terraform State

Future deployments of this infrastructure will check the state first; running $terraform plan again shows no changes, as our single resource already exists in OpenStack.

That’s basic terraform deployment covered using the openstack provider.

Adding More Resources

The resource we deployed above was ‘openstack_compute_keypair_v2’. Resource types are named by the author of a given plugin, not centrally by Terraform (which means TF config files are not re-usable between differing provider infrastructures).

Realistically this just means you need to read the doc of the provider(s) you choose to use.

Here are some openstack provider resource types we’ll use for the next demo:

“openstack_compute_keypair_v2”
“openstack_compute_secgroup_v2”
“openstack_networking_network_v2” 
“openstack_networking_subnet_v2”
“openstack_networking_router_v2”
“openstack_networking_router_interface_v2”
“openstack_compute_floatingip_v2”
“openstack_compute_instance_v2”
“openstack_lb_monitor_v1”
“openstack_lb_pool_v1”
“openstack_lb_vip_v1”

If you are familiar with Openstack, then their purpose should be clear!

The following Terraform configuration will build on our existing configuration to:

  • Upload a keypair
  • Create a security group
    • SSH and HTTPS in, plus all TCP in from other VMs in the same group.
  • Create a new Quantum network and Subnet
  • Create a new Quantum router with an external gateway
  • Assign the network to the router (router interface)
  • Request two floating IPs into our Openstack project
  • Spin up three instances of CentOS7 based on an existing image in glance
    • With sample metadata provided in our .tf configuration file
    • Assigned to the security group terraform created
    • Using the keypair terraform created
    • Assigned to the network terraform created
      • Assigned static IPs 100-103
    • The first two instances will be bound to the two floating IPs
  • Create a Load Balancer Pool, Monitor and VIP.

Before we go ahead and $terraform plan ; $terraform apply this configuration… a couple of notes.

Terraform Instance References / Variables

This configuration introduces a lot of resources, each resource may have a set of required and optional fields.

Some of these fields require the UUID/ID of other openstack resources, but as we haven’t created any of the infrastructure yet via  $terraform apply , we can’t be expected to know the UUID of objects that don’t yet exist.

Terraform allows you to reference other resources in the configuration file by their terraform resource name, terraform will then order the creation of resources and dynamically fill in the required information when needed.

For example: in the following resource section, we need the ID of an OpenStack Neutron network in order to create a subnet under it. The ID of the network is not known, as it doesn’t yet exist. So instead, a reference to our named instance of the openstack_networking_network_v2 resource, tf_network, is used, and from that resource we want the ID passed to the subnet resource, hence the .id at the end.
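
(A sketch of what that looks like; the resource names and CIDR here are mine, not necessarily those in the gist.)

resource "openstack_networking_network_v2" "tf_network" {
  name           = "tf_network"
  region         = ""
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "tf_subnet" {
  # Reference the network defined above; Terraform resolves the real ID at apply time
  network_id = "${openstack_networking_network_v2.tf_network.id}"
  cidr       = "192.168.100.0/24"
  ip_version = 4
  region     = ""
}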

Regions

You will notice each resource has a region=""  field. This is a required field in the openstack terraform provider module for every resource (try deleting it, $terraform plan  will error).

If your openstack target is not region aware/enabled, then you must set the region to null in this way.

Environment specific knowledge

Even with the dynamic referencing of IDs explained above, you are still not going to be able to just copy, paste, save and $terraform apply, as there are references in the configuration specific to my OpenStack environment (just like the username, password and OpenStack API URL in demo1). In demo2 you will need to provide the following in your copy of the configuration:

  • Your own keypair public key
  • The ID of your environment’s ‘external gateway’ network for binding your Neutron router to.
  • The pool name(s) to request floating IPs from.
  • The Name/ID of a glance image to boot the instances from.
  • The Flavour name(s) of your environment’s instances.

I have placed a sanitised version of the configuration file in a gist, with these locations clearly marked by <<USER_INPUT_NEEDED>> to make the above items easier to find/edit.

http://goo.gl/B3x1o4

Creating the Infrastructure 

With your edits to the configuration done:

Terraform Apply! (for the final time in this post!)

Enjoy your new infrastructure!

We can also confirm these items really do exist in openstack:
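
For example, with the OpenStack CLI tools of the time:

nova list                  # the three instances
neutron net-list           # the new network
neutron floatingip-list    # the two floating IPs
neutron lb-vip-list        # the LBaaS v1 VIP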

Destroying Infrastructure

$terraform destroy  will destroy your infrastructure. I find this often needs running twice, as certain objects (subnets, security groups etc) are still in use when terraform tries to delete them.

This could simply be our Terraform API calls being quicker than the state update within OpenStack; there is a bug open with the OpenStack Terraform provider.

First Run:

Second Run: Remaining resources are now removed.

That’s all for now, boys and girls!

Enjoy your weekend.