Measuring internet consistency with speedtest.net, plotly and docker

While measuring latency of your internet link over time is as simple as something like this:
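(The original snippet isn’t preserved here, so as a stand-in, something as simple as a cron’d ping will do the job — the host and log path are just examples.)

# cron entry: log one ping round-trip time to a well-known host every minute
* * * * * ping -c 1 8.8.8.8 | grep 'time=' >> /var/log/latency.log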

I couldn’t find anything which would give a good measure of bandwidth consistency over time.

Now this is understandable: you need to use bandwidth to properly measure bandwidth (especially if you don’t run the infrastructure), so it’s a less common tool in the regular network engineer’s arsenal; tools such as Speedtest.NET are designed for a one-off, point-in-time check of internet speed.

But with considerable oversubscription common amongst home ISPs, I was looking for a way to graph trends in my internet connectivity, at an interval which wouldn’t cause network issues but would give me a lot more insight than the occasional, manual test.

This is when I stumbled upon https://github.com/sivel/speedtest-cli: an excellent Python implementation of the web-browser client for Speedtest.NET. Suddenly, we have a decent command line interface to Speedtest.NET!

There are even options to output the results to CSV or JSON, ideal for most!

But what if we want this to be truly hands-off monitoring? Enter Plot.ly and Docker!

Plotly gives us an API and Python bindings to easily submit and update data on a plotly-hosted graph… for free (as long as the graphs are public, which is cool).
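If you haven’t used the plotly Python bindings before, the gist is along these lines (a minimal sketch; the filename and values are illustrative, and it assumes your API credentials are already in ~/.plotly/.credentials):

import datetime
import plotly.plotly as py
from plotly.graph_objs import Scatter

# Illustrative values standing in for a speedtest result.
timestamp = datetime.datetime.now()
download_mbps = 35.2

# fileopt='extend' appends the new point to the existing hosted graph
# instead of overwriting it, so each run adds another data point.
trace = Scatter(x=[timestamp], y=[download_mbps], name='download')
py.plot([trace], filename='internet-speed', fileopt='extend', auto_open=False)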

So I added some basic plotly support.

Running it gives us a graph in plotly like this…

Run it again; you get a second data point.

Enter docker… and we can package this up into a one-hit measurement command which will run anywhere, adding each new dataset (with timestamp) to the graph.

You can run this yourself with one command:
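(Illustrative only — substitute the real image name; the -v mount passes your plot.ly credentials directory into the container.)

docker run --rm -v ~/.plotly:/root/.plotly <speedtest-plotly-image>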

All you’ll need is a plot.ly API key (which we pass into the container with the -v directory mount).

I’ve added more plotly info to the readme in the PR to speedtest-cli for those that want to know more:

https://github.com/matjohn2/speedtest-cli/blob/dbb1e9ae1e295920141377be4649a6505b897b01/README.rst

And here’s my live graph (click for live).

Using ZFS with LinuxKit and Moby.

After I raised an issue about compiling ZFS, the awesome LinuxKit community have designed it into their kernel build process.

Here’s a howto on building a simple (SSH) LinuxKit ISO, which has the ZFS drivers loaded.

Before we start, if you’re not familiar with LinuxKit or why you’d want to build a LinuxKit OS image, this article may not be for you; but for those interested in the what, why and how, the docs give a really good overview:

  • LinuxKit Architecture Overview.
  • Configuration Reference.

Step 0. Let’s start at the very beginning.

To kick us off, we’ll take an existing LinuxKit configuration, below. This will give us a system with SSH via public-key auth, which also runs an *insecure* root shell on the local terminal.
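Something like the following, saved as simplessh.yml (a sketch along the lines of the linuxkit examples — image tags are illustrative, and the authorized_keys source should point at your own public key):

kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0 console=ttyS0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
  - linuxkit/ca-certificates:<tag>
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  # *insecure* root shell on the local terminal
  - name: getty
    image: linuxkit/getty:<tag>
    env:
      - INSECURE=true
  # SSH with public key auth
  - name: sshd
    image: linuxkit/sshd:<tag>
files:
  - path: root/.ssh/authorized_keys
    source: ~/.ssh/id_rsa.pub
    mode: "0600"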

We would then build this into an ISO with the moby command:

moby build -format iso-bios simplessh.yml

Moby goes and packages up our containers, inits and kernel, and gives us a bootable ISO. Awesome.

I booted the ISO in VirtualBox (there is a nicer way of testing images with Docker’s HyperKit using the linuxkit CLI, but VirtualBox is closer to my use case).

Step 1. The ZFS Kernel Modules

The ZFS kernel modules aren’t going to get shipped in the default LinuxKit kernel images (such as the one we’re using above). Instead, they can be compiled and added to your build as a separate image.

Here is one I built earlier; notice the extra container entry under the init: section, coming from my Docker Hub repo (trxuk). You’ll see I’m also using a kernel that was built at the same time, so I’m 100% sure they are compatible.
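The relevant sections look something like this (a sketch — the image names under my trxuk repo and the tags are illustrative):

kernel:
  # kernel built at the same time as the ZFS modules, so versions match
  image: trxuk/kernel:<tag>
  cmdline: "console=tty0 console=ttyS0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
  # extra entry: the pre-built ZFS kernel modules
  - trxuk/zfs-kmod:<tag>
onboot:
  # load the zfs module (and its dependencies) at boot
  - name: modprobe
    image: trxuk/modprobe:<tag>
    command: ["modprobe", "zfs"]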

We have also added a “modprobe” onboot: service; this just runs modprobe zfs to load the ZFS module and its dependencies (SPL, etc.) on boot.

The metadata in the modprobe image automatically maps /lib/modules and /sys into the modprobe container, so it can access the modules it needs to modprobe.

For those interested, you can see that metadata here in the package build: https://github.com/eyz/linuxkit/blob/be1172294ac66144bedaaaa98270ea0a5c95d140/pkg/modprobe/Dockerfile#L17

Also, it’s worth highlighting that this module is still a PR for linuxkit, hence the image is also coming from my own dockerhub repo at the moment: https://github.com/linuxkit/linuxkit/pull/2506

1.1 Using my images

As my Docker Hub repos are public, you could go ahead and use the configuration I’ve provided above to use my builds of the kernel, ZFS modules and modprobe package. You end up booting into a system that looks like this:

Huzzah! Success!

The observant amongst you may have noticed that the SSH image is also now coming from my trxuk/sshd Docker Hub repo; this is because I’ve edited it to have the ZFS userspace tools (zpool, zfs) built in, instead of having to run the following on each boot:
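(The exact command isn’t preserved in this copy; installing the Alpine ZFS userspace package by hand would look something like this, as an illustration:)

# install the ZFS userspace tools (zpool, zfs) into the running container
apk add --no-cache zfs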

1.2 Building your own ZFS image

I built the images above using work recently committed into the linuxkit repo by Rolf Neugebauer (thanks to him). If you wanted to build your own, you now easily can:

Once those commands have finished, you’ll have two new Docker Hub repos: one containing the kernel, the other the matching zfs-kmod image to use in the init: section.

There is an issue currently preventing the zfs-kmod image from working with modprobe (depmod appears to run in the build, but the output modules.dep doesn’t end up including zfs). I’ll be opening a PR to resolve this; if you’re building your own module as above, you may want to hold off.

I hope this helps; I’ve only just got to this stage myself (very much by standing on others’ shoulders!) so watch out for deeper ZFS and LinuxKit articles coming soon! 🙂


Automatic DNS and SSL on Kubernetes with LetsEncrypt – Part 2

TL;DR Part 2 of how to make your Kubernetes cluster super awesome by adding two pods which automatically handle public DNS registration and SSL certs for any deployment you choose! Reduces the complexity of deployments and reduces manual extra tasks.

Part one recap.

In part one we discussed the advantages of the Kubernetes Ingress Controller and configured our cluster to automatically register the public IPs of ingress controllers into AWS Route53 DNS using an annotation.

Go and catch up HERE if you missed it!

TLS / SSL

Now it’s time to add the really fun stuff: we already know what subdomain we want to register with our ingress controller (you did read part one, right?), so we have all the information we need to automatically configure SSL for that domain as well!

There’s something awesome about deploying an app to Kubernetes, browsing to the URL you configured and seeing a happy green BROWSER VALID SSL connection already set up.

Free, Browser Valid SSL Certificates… as long as you automate!

If you haven’t heard of LetsEncrypt, then this blog post is going to give you an extra bonus present. LetsEncrypt is a browser-trusted certificate authority which charges nothing for its certs and is also fully automated!

This means code can request a domain cert, prove to the certificate authority that the server making the request actually owns the domain (by placing specific content on the webserver for the CA to check) and then receive the valid certificate back, all via the LetsEncrypt API.

If you want to know how this all works, visit the Lets Encrypt – How It Works page.

Lets Encrypt Automated Proof of Ownership

Without LetsEncrypt, this process would have manual validation steps, as with most other CAs, and potentially no API for requesting certs at all. We really must thank them for their efforts in making all this possible.

Using Lets Encrypt with Kubernetes Ingress Controllers

Much like with the automatic DNS problem, a Google search returns more questions than solutions, with different bits of projects and GitHub issues suggesting a number of paths. This blog post aims to distill all of my research and what worked for me.

After testing a few different things, I found that a project called Kube-Lego did exactly what I wanted:

  • Supports configuring both GCE Ingress Controllers and NginX ingress controllers with LetsEncrypt Certs (I’m using GCE in this example).
  • Supports automatic renewals and the automated proof of ownership needed by LetsEncrypt.

Another reason I liked kube-lego is that it’s standalone: the LetsEncrypt code isn’t embedded in the LoadBalancer (Ingress Controller) code itself, which would have caused me problems:

  1. I’m using Google’s GCE load balancers, so I have no access to their code anyway.
  2. Even if I was running my own NginX/Caddy/etc. ingress controller pods, if LetsEncrypt was embedded I’d need to write some clustering logic in order to have more than one instance of them running; otherwise they would all race each other to get a cert for the same domain and I’d end up in a mess (and rate-limited by the LetsEncrypt API).

KubeLego seemed like the most flexible choice.

Installing KubeLego

Installation is pretty simple, as the documentation at https://github.com/jetstack/kube-lego was much better than the dns-controller from Part 1 of this article.

Firstly, we configure a ConfigMap that the kube-lego pod will get settings from. I’ve saved this as kube-lego-config-map.yaml:
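Something along these lines (adapted from the kube-lego examples; the e-mail address and namespace are placeholders, and the URL shown is the production LetsEncrypt ACME endpoint):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  # e-mail address to register with LetsEncrypt
  lego.email: "you@example.com"
  # LetsEncrypt ACME API endpoint (use the staging URL while testing)
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"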

Now we need a deployment manifest for the kube-lego app itself. I’ve saved this as kube-lego.yaml:
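A sketch of the Deployment, based on the upstream kube-lego example (the image tag and probe details may differ from what I actually ran):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-lego
  namespace: kube-lego
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.5
        ports:
        - containerPort: 8080
        env:
        # pull the e-mail address and API endpoint from the ConfigMap above
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.email
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.url
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        # health check against kube-lego's own endpoint on port 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1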

Notice our Deployment references our ConfigMap to pull settings for e-mail and API endpoint. Also notice the app exposes port 8080, more on that later!

We can now deploy (both configmap and app) onto our k8’s cluster:
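(If you’re using a dedicated kube-lego namespace as in the sketches above, create that first.)

kubectl create namespace kube-lego
kubectl create -f kube-lego-config-map.yaml
kubectl create -f kube-lego.yaml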

Voila! We’re running kube-lego on our cluster.

Testing Kube-lego

You can view the logs to see what kube-lego is doing. By default it will listen for all new ingress controllers and take action with certs if they have certain annotations, which we’ll cover below.

Also, if the application fails to start for whatever reason, the health check in the deployment manifest above will fail and Kubernetes will restart the pod.

Your pod name will differ for the logs command:
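For example (namespace as per the manifests above; the pod name suffix is a placeholder):

kubectl get pods --namespace kube-lego
kubectl logs --namespace kube-lego kube-lego-<pod-id>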

Putting it all together

Here we are going to create an app deployment for which we want all this magic to happen!

However, there is a reason automatic DNS registration was part one of this blog series… LetsEncrypt validation depends on resolving the requested domain down to our K8’s cluster, so if you haven’t enabled automatic DNS (or put the ingress controller’s public IP in DNS yourself), then LetsEncrypt will never be able to validate ownership of the domain and therefore never give you a certificate!

It may be worth revisiting part 1 of this series if you haven’t already (it’s good, honest!).

App Deployment manifests

If you’re familiar with Kubernetes, then you’ll recognise the following manifests simply deploy us a nginx sample ‘application’ in a new namespace. The differences to enable DNS and SSL are all in the ingress controller definition.

namespace.yaml creates our new namespace:
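A minimal sketch (the namespace name is an example):

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-demo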

nginx.yaml deploys our application:
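Again a sketch — any stateless web app would do; names and image tag are illustrative:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80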

service.yaml is needed to track active backend endpoints for the Ingress Controller (notice it’s of type: NodePort; it’s not publicly exposed):
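A sketch, matching the labels above:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx-demo
spec:
  # NodePort: reachable by the ingress load balancer, but not given a public IP itself
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx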

Finally, our Ingress Controller, ingress-tls.yaml. I’ve highlighted the ‘non-standard’ bits which enable our automated magic:
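A sketch of the interesting bits — the two annotations and the tls: section are what trigger the automation (hostname and secret name are examples):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: nginx-demo
  annotations:
    # part 1: tells the dns-controller to register the LB's public IP in Route53
    dns.alpha.kubernetes.io/external: "true"
    # tells kube-lego to request (and renew) a LetsEncrypt cert for the hosts below
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - app.ourdomain.com
    # kube-lego stores the issued certificate in this secret
    secretName: app-ourdomain-com-tls
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80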

Let’s deploy these manifests to our Kubernetes cluster and watch the magic happen!
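(Using the file names we saved above:)

kubectl create -f namespace.yaml -f nginx.yaml -f service.yaml -f ingress-tls.yaml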

Right, now our ingress controller is going to go and configure a GCE load balancer for us (standard); this will be allocated a public IP, and our dns-controller will register this against our hostname in Route53:
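You can watch the ingress getting its public address with something like this (namespace as per the example manifests):

# the ADDRESS column populates once the GCE load balancer has a public IP
kubectl get ingress --namespace nginx-demo -w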

And looking in our AWS Route53 portal:

Route53 showing Updated DNS Records

Excellent!

While this was happening, kube-lego was also configuring the GCE load balancer to support LetsEncrypt’s ownership checks. Looking at the LoadBalancer configuration in Google’s cloud console, we can see that a specific URL path has been configured to point to the kube-lego app on 8080.

This allows kube-lego to control the validation requests for domain ownership (the ACME challenge requests under /.well-known/acme-challenge/) that will come in from LetsEncrypt when we request a certificate. All other request paths will be passed to our actual app.

Kube-lego adds configuration to the ingress loadbalancer to pass LetsEncrypt ownership challenges

This will allow the kube-lego process (requesting certs via LetsEncrypt) to succeed:

As soon as a valid cert is received, kube-lego re-configures the GCE LoadBalancer for HTTPS as well as HTTP (notice in the above screenshot, only Protocol HTTP is enabled on the LB when it is first created).

Kube-lego configures SSL certificate into GCE’s ingress load balancer

The Result

The whole process above takes a couple of minutes to complete (LB getting a public IP, DNS registration, LetsEncrypt checks, getting the cert, configuring the LB with SSL) but then… Huzzah! Completely hands-off, publicly available services, protected by valid SSL certs!

Now your developers can deploy applications which are SSL by default without any extra hassle.

I appreciate any corrections, comments or feedback; please direct them to @mattdashj on Twitter.

Until next time!

Matt

Automatic DNS and SSL on Kubernetes with LetsEncrypt – Part 1

TL;DR How to make your Kubernetes cluster super awesome by adding two pods which automatically handle public DNS registration and SSL certs for any deployment you choose! Reduces the complexity of deployments and reduces manual extra tasks.

Overview

Kubernetes Ingress controllers provide developers an API for creating HTTP/HTTPS (L7) proxies in front of your applications, something that historically we’ve done ourselves; either inside our application pods with our apps, or more likely, as a separate set of pods in front of our application, strung together with Kubernetes Services (L4).

Without Ingress Controller

Public kubernetes service flow without ingress controllers

With Ingress Controller

Ingress controller simplifying k8s services

Technically, there is still a Service in the background to track membership, but it’s not in the “path of traffic” as it is in the first diagram.

What’s more, Ingress controllers are pluggable: a single Kubernetes API to developers, but any L7 load balancer in reality, be it Nginx, GCE, Traefik, or hardware… Excellent.

However, there are some things Ingress controllers *DON’T* do for us, and that is what I want to tackle today…

  1. Registering our ingress loadbalancer in public DNS with a useful domain name.
  2. Automatically getting SSL/TLS certificates for our domain and configuring them on our Ingress load balancer.

With these two additions developers can deploy their application to K8’s and automatically have it accessible and TLS secured. Perfect!

DNS First

DNS is fairly simple, yet a Google search for this topic makes it sound anything but. Lots of different suggestions, GitHub issues, half-started projects.

All we want is something to listen for new Ingress Controllers, find the public IP given to the new Ingress load balancer and update DNS with the app’s DNS name and load balancer IP.

After some research, code exists to do exactly what we want: it’s called ‘dns-controller’ and it’s now part of the ‘kops’ codebase from the cluster-ops SIG. It currently updates AWS Route53, but that’s fine, as it’s what I’m using anyway.

https://github.com/kubernetes/kops/tree/master/dns-controller

However, the documentation is slim and unless you’re using KOPS, it’s not packaged in a useful way. Thankfully, someone has already extracted the dns-controller pieces and packaged them in a docker container for us.

The security guy in me points out: if you’re looking at anything more than testing, I’d strongly recommend packaging the dns-controller code yourself so you know 100% what’s in it.

DNS – Here’s how to deploy (1/2)

Create the following deployment.yaml manifest:
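A sketch of what that manifest looks like — the image is whichever dns-controller build you’re using (ideally one you’ve packaged yourself, as noted below), and the flags shown are illustrative; check the dns-controller docs for the exact set:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dns-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: dns-controller
    spec:
      containers:
      - name: dns-controller
        # placeholder -- substitute your own packaged dns-controller image
        image: <your-repo>/dns-controller:latest
        args:
        # watch Ingress objects and push their public IPs into Route53
        - --watch-ingress=true
        - --dns=aws-route53
        - --zone=*/*
        volumeMounts:
        # AWS credentials file mounted from the secret created below
        - name: aws-credentials
          mountPath: /root/.aws
          readOnly: true
      volumes:
      - name: aws-credentials
        secret:
          secretName: aws-credentials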

This pulls down our pre-packaged dns-controller code and runs it on our cluster. By default I’ve placed this in the kube-system namespace.

The code needs to change AWS Route53 DNS entries *duh*, so it also needs AWS Credentials.

(I recommend using AWS IAM to create a user with ONLY the access to the Route53 zone you need this app to control. Don’t give it your developer keys; anyone in your K8’s cluster could potentially read them.)

When we’ve got our credentials, create a secret with your AWS credentials file in it as follows:
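For example (the secret name matches the volume in the deployment sketch above):

kubectl create secret generic aws-credentials \
  --namespace kube-system \
  --from-file=credentials=/path/to/your/aws/credentials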

The path to your AWS credentials file will differ. If you don’t have a credentials file, it’s a simple format, as shown below:
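(The keys shown are the standard AWS documentation example values; substitute your own.)

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY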

Now deploy your dns-controller into K8’s with  kubectl create -f deployment.yaml

You can query the application’s logs to see it working; by default it will try to update any DNS domain it finds configured for an Ingress controller with a matching zone in Route53.

Example log output:

You will see errors here if it cannot find your AWS credentials (check your secret) or if the credentials are not valid!

Using our new automated DNS service.

Right! How do we use it? This is an ingress controller without automatic DNS:
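(A minimal sketch; names and host are illustrative.)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80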

And this is one WITH our new automatic DNS registration:
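(The same ingress, plus the annotation:)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # ask the dns-controller to publish this ingress' public IP in DNS
    dns.alpha.kubernetes.io/external: "true"
spec:
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80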

Simply add the annotation  dns.alpha.kubernetes.io/external: "true" to any ingress controller and our new dns-controller will try to add the domain listed under  - host: app.ourdomain.com  to DNS with the public IP of the Ingress controller.

Try it out! My cluster is on GCE (a GKE cluster), so we’re using the Google load balancers. I’m noticing they take around ~60 seconds to get a public IP assigned, so DNS can take ~90–120 seconds to be populated. That said, I don’t need to re-deploy my ingress controllers with my software deployments, so this is acceptable for me.

In the next section, we’ll configure automatic SSL certificate generation and configuration for our GCE load balancers!

Go read part two now, CLICK HERE.

Comments or suggestions? Please find me on twitter @mattdashj

DCOS.io OpenDCOS Authentication Token

I was looking to script some containers against an OpenDCOS deployment; however, the authentication for OpenDCOS is OAuth against either Google, GitHub or Microsoft.

DCOS login options


The docs (here) discuss requesting an auth token for a given user, but the API URL/path doesn’t seem to work in OpenDCOS.

Turns out, the correct URL is below. Paste in a browser, authenticate and your token will be provided.

https://<YOUR-DCOS-MASTER-IP>/login?redirect_uri=urn:ietf:wg:oauth:2.0:oob

This is the same URL you’ll be asked to authenticate against if you install the DCOS local CLI.

You can then send this in any requests to the DCOS services (such as Marathon) using an HTTP header as below:
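For example, with curl (master IP and token are placeholders; the Marathon path shown is the standard one behind the DCOS admin router):

curl -k -H "Authorization: token=<YOUR-AUTH-TOKEN>" \
  https://<YOUR-DCOS-MASTER-IP>/service/marathon/v2/apps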


OpenStack infrastructure automation with Terraform – Part 2

TL;DR: Second of a two-post series looking at automation of an OpenStack project with Terraform, using the new Terraform OpenStack provider.

With the OpenStack provider for Terraform being close to acceptance into the Terraform release, it’s time to unleash its power on the Cisco OpenStack-based cloud.

In this post, we will:

  • Write a Terraform ‘.TF’ file to describe our desired deployment state, including:
    • Neutron networks/subnets
    • Neutron gateways
    • Keypairs and Security Groups
    • Virtual machines and Volumes
    • Virtual IPs
    • Load balancers (LBaaS)
  • Have Terraform deploy, modify and rip down our infrastructure.

If you don’t have the terraform openstack beta provider available, you’ll want to read Part 1 of this series.

Terraform Intro

Terraform “provides a common configuration to launch infrastructure”, from IaaS instances and virtual networks to DNS entries and e-mail configuration.

The idea being that a single Terraform deployment file can leverage multiple providers to describe your entire application infrastructure in one deployment tool; even if your DNS, LB and Compute resources come from three different providers.

Different infrastructure types are supported via provider modules; it’s the OpenStack provider we’re focused on testing here.

If you’re not sure why you want to use Terraform, you’re probably best getting off here and having a look around Terraform.io first!

Terraform Configuration Files

Terraform configuration files describe your desired infrastructure state, built up of multiple resources, using one or more providers.

Configuration files are a custom, but easy to read format with a .TF extension. (They can also be written in JSON for machine generated content.)

Generally, a configuration file will hold necessary parameters for any providers needed, followed by a number of resources from those providers.

Below is a simple example with one provider (OpenStack) and one resource (an SSH public key to be uploaded to our OpenStack tenant):
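(A sketch; credentials, the Keystone endpoint and the key material are placeholders.)

# demo1.tf
provider "openstack" {
  user_name   = "<USERNAME>"
  tenant_name = "<TENANT_NAME>"
  password    = "<PASSWORD>"
  auth_url    = "http://<OPENSTACK_KEYSTONE_URL>:5000/v2.0"
}

resource "openstack_compute_keypair_v2" "tf_keypair" {
  name       = "terraform-demo-keypair"
  region     = ""
  public_key = "ssh-rsa AAAA... demo@example.com"
}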

Save the above as  demo1.tf and replace the following placeholders with your own Openstack environment login details.

Now run $terraform plan  in the same directory as your demo1.tf  file. Terraform will tell you what it’s going to do (add/remove/update resources), based on checking the current state of the infrastructure:

Terraform checks and finds that the keypair doesn’t already exist on our OpenStack provider, so a new resource is going to be created if we apply our infrastructure… good!

Terraform Apply!
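Now apply the plan (output omitted here):

terraform apply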

Success! At this point you can check Openstack to confirm our new keypair exists in the IaaS:
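For example, from the CLI (either client works):

nova keypair-list
# or, with the unified client:
openstack keypair list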


Terraform State

Future deployments of this infrastructure will check the state first, running $terraform plan  again shows no changes, as our single resource already exists in Openstack.

That’s basic terraform deployment covered using the openstack provider.

Adding More Resources

The resource we deployed above was ‘openstack_compute_keypair_v2’. Resource types are named by the author of a given plugin, not centrally by Terraform (which means TF config files are not re-usable between differing provider infrastructures).

Realistically this just means you need to read the doc of the provider(s) you choose to use.

Here are some openstack provider resource types we’ll use for the next demo:

“openstack_compute_keypair_v2”
“openstack_compute_secgroup_v2”
“openstack_networking_network_v2” 
“openstack_networking_subnet_v2”
“openstack_networking_router_v2”
“openstack_networking_router_interface_v2”
“openstack_compute_floatingip_v2”
“openstack_compute_instance_v2”
“openstack_lb_monitor_v1”
“openstack_lb_pool_v1”
“openstack_lb_vip_v1”

If you are familiar with Openstack, then their purpose should be clear!

The following Terraform configuration will build on our existing configuration to:

  • Upload a keypair
  • Create a security group
    • SSH and HTTPS in, plus all TCP in from other VMs in the same group.
  • Create a new Quantum network and Subnet
  • Create a new Quantum router with an external gateway
  • Assign the network to the router (router interface)
  • Request two floating IPs into our Openstack project
  • Spin up three instances of CentOS7 based on an existing image in glance
    • With sample metadata provided in our .tf configuration file
    • Assigned to the security group terraform created
    • Using the keypair terraform created
    • Assigned to the network terraform created
      • Assigned static IPs 100-103
    • The first two instances will be bound to the two floating IP’s
  • Create a Load Balancer Pool, Monitor and VIP.

Before we go ahead and $terraform plan ; $terraform apply  this configuration, a couple of notes.

Terraform Instance References / Variables

This configuration introduces a lot of resources, each resource may have a set of required and optional fields.

Some of these fields require the UUID/ID of other openstack resources, but as we haven’t created any of the infrastructure yet via  $terraform apply , we can’t be expected to know the UUID of objects that don’t yet exist.

Terraform allows you to reference other resources in the configuration file by their terraform resource name, terraform will then order the creation of resources and dynamically fill in the required information when needed.

For example, in the following resource section we need the ID of an OpenStack Neutron network in order to create a subnet under it. The ID of the network is not known, as it doesn’t yet exist. So instead, a reference to our named instance of the openstack_networking_network_v2 resource, tf_network, is used, and from that resource we want the ID passed to the subnet resource, hence the .id at the end.
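Something like this (CIDR and names are illustrative):

resource "openstack_networking_network_v2" "tf_network" {
  name           = "tf_network"
  region         = ""
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "tf_subnet" {
  name       = "tf_subnet"
  region     = ""
  # the network doesn't exist yet, so reference the resource above;
  # terraform resolves the real ID at apply time
  network_id = "${openstack_networking_network_v2.tf_network.id}"
  cidr       = "192.168.100.0/24"
  ip_version = 4
}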

Regions

You will notice each resource has a region=""  field. This is a required field in the openstack terraform provider module for every resource (try deleting it, $terraform plan  will error).

If your OpenStack target is not region aware/enabled, then you must set the region to an empty string in this way.

Environment specific knowledge

Even with the dynamic referencing of IDs explained above, you are still not going to be able to simply copy, paste, save and $terraform apply, as there are references in the configuration specific to my OpenStack environment, just like the username, password and OpenStack API URL in demo1. In demo2 you will need to provide the following in your copy of the configuration:

  • Your own keypair public key
  • The ID of your environment’s ‘external gateway’ network for binding your Neutron router to.
  • The pool name(s) to request floating IP’s from.
  • The Name/ID of a glance image to boot the instances from.
  • The Flavour name(s) of your environment’s instances.

I have placed a sanitised version of the configuration file in a gist, with these locations clearly marked by <<USER_INPUT_NEEDED>> to make the above items easier to find/edit.

http://goo.gl/B3x1o4

Creating the Infrastructure 

With your edits to the configuration done:
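Run a plan first to review what will be created:

terraform plan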

Terraform Apply! (for the final time in this post!)
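And away it goes (output omitted):

terraform apply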

Enjoy your new infrastructure!

We can also confirm these items really do exist in openstack:
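For example, with the CLI clients of the day (LBaaS v1 commands shown; output omitted):

nova list            # the three instances
neutron net-list     # the new network and subnet
neutron lb-pool-list
neutron lb-vip-list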

Destroying Infrastructure

$terraform destroy  will destroy your infrastructure. I find this often needs running twice, as certain objects (subnets, security groups etc) are still in use when terraform tries to delete them.

This could simply be our Terraform API calls being quicker than the state update within OpenStack; there is a bug open with the OpenStack Terraform provider.

First Run:

Second Run: Remaining resources are now removed.

That’s all for now boys and girls!

Enjoy your weekend.


OpenStack infrastructure automation with Terraform – Part 1

Update: The Openstack provider has been merged into terraform. It comes with the terraform default download as of 0.4.0.

Get it HERE: https://terraform.io/downloads.html

Then proceed directly to the second part of this series to get up and running with Terraform on Openstack quickly!

Or… read more below for the original post.

Continue reading OpenStack infrastructure automation with Terraform – Part 1

Openstack and PaaS; Love a good geek rift!

I’ve been in the Bay Area, CA for a couple of weeks now (excluding my cheeky jaunt to Vegas!) and even though I’m now on vacation, it’s been the perfect place to watch the OpenStack Havana drama unfold, mostly stemming from this catalyst:

http://www.mirantis.com/blog/openstack-havanas-stern-warning-open-source-or-die/

Well (for me, anyway) especially this bit:

Too many times we see our customers exploring OpenShift or Cloud Foundry for a while, and then electing instead to use a combination of Heat for orchestration, Trove for the database, LBaaS for elasticity, then glue it all together with scripts and Python code and have a native and supported solution for provisioning their apps.

“Hell no!” was my initial reaction, and while there has been a definite retraction from the tone of the whole post… I still think a hell no is where I stand on this.

And I’ll tell you why, but firstly:

  • I like OpenStack, as an IaaS. I like its modularity for networking and the innovation taking place to provide a rock-solid IaaS layer.
  • It was a much needed alternative to VMWare for a lot of people and its growth into stability is something I’ve enjoyed watching (competition is never a bad thing, right! 😉 ).

That said, here’s why I’ll take my PaaS served right now, with a sprinkling of CloudFoundry:

  • People tying things together themselves with chunks of internally-written scripts/Python (I’d argue even Puppet/Chef, as we strive for more portability across public/private boundaries) is exactly the kind of production environment we want to move away from as an industry:
    • Non-portable.
    • Siloed to that particular company (or more likely, project/team.)
    • Often badly maintained due to knowledge attrition.

… and into the future:

  • Defined, separated layers with nothing connecting them but an externally facing API was, in my mind, the very POINT of IaaS/PaaS/XaaS and their clear boundaries.
  • These boundaries allow for portability.
    • Between private IaaS providers and the PaaS/SaaS stack.
    • Between public/private cloud-burst style scenarios.
    • For complex HA setups requiring active/active service across multiple, underlying provider technologies.
      • think ‘defence in depth’ for IaaS.
      • This may sound far fetched, but it actually is and has already been used to back SLAs and protect against downtime without requiring different tooling in each location.
    • I just don’t see how a 1:1 mapping of PaaS and IaaS inside OpenStack is a good thing for people trying to consume the cloud in a flexible and ‘unlocked’ manner.

It could easily be argued that if we are only talking about private and not public IaaS consumption, I’d have fewer points to make above. Sure, but I guess it depends on whether you really believe the future will be thousands of per-enterprise, siloed, private IaaS/PaaS installations, each with their own specifics.

As an aside, another concern I have with OpenStack in general right now is the providers implementing OpenStack. Yes, there is an OpenStack API, but it’s amazing how many variations on this there are (maybe I’ll do the maths some day):

  • API versions
  • Custom additions (I’m looking at you, Rackspace!)
  • Full vs. Custom implementation of all/some OpenStack components.

Translate this to the future idea of PaaS and IaaS being offered within OpenStack, and I see conflicting requirements:

From an IaaS I’d want:

  • Easy to move between/consume IaaS providers.
  • Not all IaaS providers necessarily need the same API, but it would be nice if it was one per ‘type’ to make libraries/wrappers/Fog etc easier.

From a PaaS I’d want:

  • Ease of use for Developers
  • Abstracted service integration
    • IaaS / PaaS providers may not be my best option for certain data storage
    • I don’t want to be constrained to the development speed of a monolithic (P+I)aaS stack to test out new Key-Value-Store-X
  • Above all, PORTABILITY

This seems directly against the above for IaaS…

I.e., I don’t mind having to abstract my PaaS installation/management from multiple IaaS APIs so that I can support multiple clouds (especially if my PaaS management/installation system can handle that for me!); however I DON’T want lots of potential differences in the presentation of my PaaS API causing issues for the ‘ease of use, just care about your code’ aspect for developers.

I’m not sure where this post stopped being a nice short piece and became more of a ramble, but I’ll take this as a good place to stop. PaaS vendors are not going anywhere imho, and marketing-through-bold-statements on the web is very much still alive 😉

Matt


Pivotal Cloud Foundry Conference and Multi-Site PaaS

So I recently got back from Pivotal’s first Cloud Foundry conference:


As I’m not a developer, I guess by the power of deduction I’ll settle for cloud leader.

While there, this newly appointed cloud leader, erm, led a pretty popular discussion on multi-site PaaS (with a focus on Cloud Foundry, but a lot of the ideas translate) with the intention of stirring up enough technical and business intrigue to move the conversation into something of substance and action.

I’m about to kick off further discussions on the VCAP-DEV mailing list (https://groups.google.com/a/cloudfoundry.org/forum/#!forum/vcap-dev), but a draft run here can’t hurt:

  • Having multiple separate PaaS instances is cheating (and a pain for everyone involved)
  • My definition of multi-site PaaS is currently to support application deployment and horizontal scalability in two or more datacentres concurrently, providing resilience and scalability beyond what is currently available.
  • The multi-site PaaS should have a single point of end user/customer interaction.
  • Key advantages should be simplified management, visibility and control for end customers.
  • Multi-Site PaaS should be architected in such a way as to prevent a SPOF or inter-DC dependencies

All well and good saying WHAT I want, but how about how?

  • A good starting point would be something that sits above the current Cloud Foundry components (i.e., something new that all API calls hit first) that could perform extensions to existing functions:
    • Extend API functionality for multi-site operations
      • cf scale --sites 3 <app>
      • cf apps --site <name>
      • etc
    • Build up a map of PaaS-wide routes/DNS A records to aid in steering requests to the correct site’s GoRouters.
    • This may also be a good place to implement SSL with support for per-application SSL certificates as these will need to be available in any site the application is spanned to also.
  • Further thoughts that I need to pin down
    • I’d like to see this getting health state from each DC and redirecting new requests if necessary to best utilize capacity (taking into account latency to the client of the request)

Where this layer will sit physically, and how it will become this unification layer without itself becoming the SPOF, is still playing on my mind.
Current thoughts include:

  • Calling the layer GRouter (global router) for convenience.
  • This layer will be made up of multiple instances of GRouters.
  • Passive listening to NATS messages WITHIN the GRouter instance’s site for all DNS names/apps used at that site.
  • Distributed protocol for then sharing these names to all other GRouters (NOT A SHARED DB/SPOF).
    • No power to overwrite another GRouter’s DB.
    • Maybe something can be taken from the world of IP routing, where local routes outrank remote updates but remote routes can still be used if need be.
    • Loss of DB/updates causes the GRouter instance to fail open and forward all requests to local GoRouters (failure mode still allows independent operation of the local site).
    • DNS should only be steering real PaaS app requests or API calls to the GRouter layer, although they may be for a different DC depending on the use of Anycast DNS *needs idea development*.
    • GRouters in normal mode can send redirects for requests for apps that are in other DCs, to allow a global anycast DNS infrastructure.
  • Multiple instances per DC, just like GoRouters, so they scale.

Other points which will need discussion are:

  • How does a global ‘cf apps’ work?
  • Droplets will need saving to multi-DC blob stores for re-use in scale/outage scenarios
  • We will need a way to cope with applications that have to stay within a specific geographic region.
  • Blue/Green zero downtime deployments?
    • One site at a time
    • Two app instances within each DC and re-mapping routes
    • The second option would prevent the GRouter layer needing to be involved in orchestration, reducing complexity.

My next steps are to map out use cases and matching data flows embodying my ideas above.

Just FYI, all this was written on a SJL>LHR flight where awakeness was clinging to me like the plague, so expect revisions!

Matt


It used to just be paperwork that took time to work out!

So the delivery of a new corporate laptop this week got me thinking:

It’s much more powerful, portable and generally nicer than anything I own, and the fact of the matter is: if I’m near a computer for five out of the seven days every week, it’s going to be this one.

This ties into my last post about BYOD and staff ‘bringing their own data’. This works backwards for corporate-supplied devices too:

BYOD is currently, let’s be honest, only really bringing your own ‘add-on’ devices, and not your full digital working requirement.

BYODv2 in my mind will be where users fully bring every bit of local compute power they need to work, maybe minus the clutter/heaviness of standard peripherals such as screens and keyboards, instead having a uniform, wireless dock interface.

So for as long as there are corporate laptops, which their users are more likely to have on them than any personal laptop for a massive percentage of their weekly lives, there will always be remnants of their own personal data from lunchtimes, weekends, corporate travel, after-work hours etc.

I digress; back to MY new shiny corporate laptop:

  • I needed a kick to get all my data, which is currently spread everywhere, into some semblance of order, with peace of mind over backups etc.
  • I hate always being on the wrong device for what I’m looking for (ooh, I took that photo on my iPhone, not my laptop/tablet/Nexus 4/Tomorrows device number seventy six and a half).
  • I don’t like having to swap between personal/corp equipment, especially now the corp laptop is actually much more powerful than my own!

Turns out, this isn’t something I should have thought about. While the cloud solves some of these data-everywhere questions, it also opens up a lot more choice, i.e. a lot more questions on how to go about all this. It’s amazing just how many branches you end up with in your mind when you try and tackle what, on the face of it, is a simple ‘get to my data on my devices’ question.

Even without going into all the corporate access stuff, my personal digital life and (more to the point) access to it created this monstrosity. Can you do any worse? Have you had the same thoughts and come up with a solution that works well? Do you want to talk to me about cool ideas for BYODv2 above? Comment or e-mail.

-Matt

Mind Map of how to get to my data on all my devices
Oh dear, it’s 3AM again