Measuring internet consistency with plotly and docker

While measuring the latency of your internet link over time is as simple as a scheduled ping,
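Something along these lines, for instance; a minimal sketch, assuming ping and cron are available (the target host, interval and sample count are arbitrary choices):

```shell
# latency-check.sh: log one timestamped average-latency sample per run.
# Schedule it from cron, e.g.:
#   */5 * * * * /usr/local/bin/latency-check.sh >> /var/log/latency.csv
rtt=$(ping -c 2 -q 8.8.8.8 2>/dev/null | awk -F'/' '/^rtt|^round-trip/ {print $5}')
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ),${rtt}"
```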

I couldn’t find anything which would give a good measure of bandwidth consistency over time.

Now this is understandable: you need to use bandwidth to properly measure bandwidth (especially if you don’t run the infrastructure), so it’s a less common tool in the regular network engineer’s arsenal. Useful tools such as Speedtest.NET are designed for a one-off, point-in-time check of internet speed.

But with considerable oversubscription common amongst home ISPs, I was looking for a way to graph trends in my internet connectivity, at an interval which wouldn’t cause network issues, but would give me a lot more insight than the occasional, manual test.

This is when I stumbled upon speedtest-cli: an excellent Python implementation of the web-browser client for Speedtest.NET. Suddenly, we have a decent command-line interface to Speedtest.NET!

There are even options to output the results to CSV or JSON, which is ideal for most use cases!
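For example (assuming sivel’s speedtest-cli is installed; flag names may differ between versions):

```shell
# Write the CSV header once, then append one result per run (e.g. from cron).
speedtest-cli --csv-header > results.csv
speedtest-cli --csv >> results.csv

# Or grab a single structured result as JSON instead:
speedtest-cli --json > result.json
```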

But what if we want this to be truly hands-off monitoring? Enter Plotly and Docker!

Plotly gives us an API and Python bindings to easily submit and update data on a plotly-hosted graph… for free (as long as the graphs are public, which is cool).

So I added some basic plotly support.
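As a hedged sketch of the idea (the helper below is illustrative, not the actual patch; the commented-out call uses the legacy plotly.plotly API of the time):

```python
from datetime import datetime, timezone

def make_trace(download_bps, upload_bps):
    """Build one timestamped data point per measured rate, converted to Mbit/s."""
    now = datetime.now(timezone.utc).isoformat()
    return [
        {"x": [now], "y": [download_bps / 1e6], "name": "download"},
        {"x": [now], "y": [upload_bps / 1e6], "name": "upload"},
    ]

# Appending those points to a hosted graph then looks roughly like:
#   import plotly.plotly as py
#   py.plot(make_trace(dl, ul), filename="speedtest",
#           fileopt="extend", auto_open=False)
```

Each run extends the same named graph, which is what builds up the trend over time.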

Running it gives us a graph in plotly like this…

Run it again; you get a second data point.

Enter docker… and we can package this up into a one-hit measurement command which will run anywhere, adding each new dataset (with timestamp) to the graph.

You can run this yourself with a single command.

All you’ll need is an API key (which we pass into the container with a  -v  directory mount).
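Put together, the invocation looks something like this (the image name is hypothetical; plotly keeps the API key in ~/.plotly/.credentials, which is what the mount exposes):

```shell
# Hypothetical image name; -v mounts your plotly credentials directory
# into the container so the graph can be updated under your account.
docker run --rm -v ~/.plotly:/root/.plotly trxuk/speedtest-plotly
```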

I’ve added more plotly info to the readme in the PR to speedtest-cli, for those who want to know more.

And here’s my live graph (click for live).





Using ZFS with LinuxKit and Moby.

After I raised an issue about compiling ZFS, the awesome LinuxKit community built this into their kernel build process.

Here’s a how-to on building a simple (SSH) LinuxKit ISO with the ZFS drivers loaded.

Before we start, if you’re not familiar with LinuxKit or why you’d want to build a LinuxKit OS image, this article may not be for you; but for those interested in the what, why and how, the docs give a really good overview:

LinuxKit Architecture Overview.
Configuration Reference.

Step 0. Let’s start at the very beginning.

To kick us off, we’ll take an existing LinuxKit configuration. This will create a system with SSH via public-key auth, also running an *insecure* root shell on the local terminal.
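A sketch of what such a simplessh.yml looks like (image tags are placeholders to fill in from the linuxkit examples; the public key is your own):

```yaml
kernel:
  image: linuxkit/kernel:<tag>
  cmdline: "console=tty0 console=ttyS0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  - name: getty                     # the *insecure* local root shell
    image: linuxkit/getty:<tag>
    env:
      - INSECURE=true
  - name: sshd                      # SSH via public-key auth
    image: linuxkit/sshd:<tag>
files:
  - path: root/.ssh/authorized_keys
    contents: "ssh-rsa AAAA… you@example.com"
```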

We would then build this into an ISO with the moby command:

moby build -format iso-bios simplessh.yml

Moby goes and packages up our containers, inits and kernel, and gives us a bootable ISO. Awesome.

I booted the ISO in VirtualBox (there is a nicer way of testing images with Docker’s Hyperkit using the linuxkit CLI, but VirtualBox is closer to my use case).

Step 1. The ZFS Kernel Modules

The ZFS kernel modules aren’t going to get shipped in the default LinuxKit kernel images (such as the one we’re using above). Instead, they can be compiled and added to your build as a separate image.

Here is one I built earlier; notice the extra container entry under the  init:  section, coming from my Docker Hub repo (trxuk). You’ll see I’m also using a kernel that was built at the same time, so I’m 100% sure they are compatible.

We have also added a “modprobe”  onboot:  service; this just runs  modprobe zfs  to load the zfs module and its dependencies (SPL, etc.) on boot.
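In configuration terms, the additions look something like this (a sketch; tags are placeholders, and the images come from my trxuk Docker Hub repo as described above):

```yaml
kernel:
  image: trxuk/kernel:<tag>      # kernel built alongside the zfs modules
init:
  - trxuk/zfs-kmod:<tag>         # drops the zfs .ko files under /lib/modules
onboot:
  - name: modprobe
    image: trxuk/modprobe:<tag>
    command: ["modprobe", "zfs"] # loads zfs plus its dependencies (spl, etc.)
```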

The metadata in the modprobe image automatically maps  /lib/modules  and  /sys  into the modprobe container, so it can access the modules it needs to modprobe.

For those interested, you can see that metadata here in the package build:

Also, it’s worth highlighting that this module is still a PR for linuxkit, hence the image is also coming from my own Docker Hub repo at the moment:

1.1 Using my images

As my Docker Hub repos are public, you can go ahead and use the configuration I’ve provided above to use my builds of the kernel, zfs modules and modprobe package. You end up booting into a system that looks like this:

Huzzah! Success!

The observant amongst you may have noticed that the SSH image is also now coming from my  trxuk/sshd  Docker Hub repo; this is because I’ve edited it to have the zfs userspace tools (zpool, zfs) built in, instead of having to run the following on each boot:
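That per-boot step would be something like installing the userspace tools at runtime; a sketch, assuming an Alpine-based sshd image and Alpine’s zfs package:

```shell
# Install the ZFS userspace tools (zpool, zfs) at runtime;
# baking them into the sshd image makes this unnecessary.
apk add --no-cache zfs
```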

1.2 Building your own ZFS image

I built the images above using work recently committed into the linuxkit repo by Rolf Neugebauer. Thanks to him, if you want to build your own, you now easily can.
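As a sketch (assuming the in-tree kernel build of the time; ORG is your Docker Hub user, and exact make targets may differ):

```shell
git clone https://github.com/linuxkit/linuxkit.git
cd linuxkit/kernel
# Build and push the kernel image and the matching zfs-kmod image:
make ORG=<your-dockerhub-org> push
```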

Once those commands have finished, you’ll have two new Docker Hub repos: one containing the kernel, and the other the matching zfs-kmod image to use in the  init:  section.

There is an issue currently preventing the  zfs-kmod  image from working with modprobe (depmod appears to run in the build, but the output modules.dep doesn’t end up including zfs). I’ll be opening a PR to resolve this; if you’re building your own module as above, you may want to hold off.

I hope this helps; I’ve only just got to this stage myself (very much by standing on others’ shoulders!) so watch out for deeper ZFS and LinuxKit articles coming soon! 🙂