Visualising LinuxKit builds with Python and GraphViz

For the first time in a while I found myself still working at 5AM; by choice!

Why? LinuxKit! As you may have guessed from previous posts, I'm a fan.

Why LinuxKit?

For me, as a geek from a low-level OS/Ops background, it's like a 2017 version of discovering "Linux From Scratch", or even the first time pre-compiled binary packages appeared in Gentoo and saved you HOURS!

Exactly the bits of an OS you need for your solution, with community re-use of components, without re-inventing the wheel!

With higher-order orchestrators such as Kubernetes needing fewer and fewer moving parts under them, a more secure, more observable infrastructure, right down the stack, is more within reach than ever!

LinuxKit is the perfect toolkit to build these lower-level pieces.

The user or the builder?

I 100% believe that tooling such as LinuxKit will help millions run simpler infrastructure, potentially without knowing about it…

Like Docker images on the Hub now, the majority of users will benefit through re-use of the community's LinuxKit images.

I'd even expect to see whole open-source *products* being LinuxKit-based before the next DockerCon! Whether it's an embedded system or a productized install, the user may never know the reason for the improved experience.

But what about the builders, the people working with the early LinuxKit code to build these images, to make these systems simpler, ephemeral and more composable…

It's those people I'd like to ask: do you struggle to debug complex LinuxKit manifests? Do you spend time digging back and forth between namespaces, looking at where your mounts are? Which image got a directory from where? Or why new mounts aren't showing up where you expect them?

It's 100% possible it's just me, but I really found that visualising the build (and the mounts) helped me develop much more quickly.

So, back to the 5AM… I hacked something together using Python and GraphViz to take in a LinuxKit manifest (YAML), read the metadata labels from each of the Docker images within, and plot all of the mounts (along with image data) onto a GraphViz diagram.
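For a flavour of the approach, here's a minimal sketch (not the actual LinuxKitVis code). It assumes the manifest's images are pulled locally, and that moby-built packages carry their runtime config as JSON in the org.mobyproject.config image label:

```python
# Minimal sketch: parse a LinuxKit manifest and graph each image's
# bind mounts with GraphViz. Assumptions: images are pulled locally
# and expose JSON config via the org.mobyproject.config label.
import json
import subprocess

import yaml                    # pip install pyyaml
from graphviz import Digraph   # pip install graphviz

def image_binds(image):
    """Return the 'binds' list from an image's moby config label."""
    out = subprocess.check_output(
        ["docker", "inspect", "--format",
         '{{index .Config.Labels "org.mobyproject.config"}}', image]).strip()
    if not out or out == b"<no value>":
        return []
    return json.loads(out).get("binds", [])

manifest = yaml.safe_load(open("simplessh.yml"))
graph = Digraph("linuxkit", format="png")

for section in ("onboot", "services"):
    for svc in manifest.get(section) or []:
        node = "%s (%s)" % (svc["name"], section)
        graph.node(node, label="%s\n%s" % (svc["name"], svc["image"]))
        # One edge per bind mount: source path -> container, labelled
        # with the destination path inside the container.
        for bind in image_binds(svc["image"]):
            src, _, dst = bind.partition(":")
            graph.node(src, shape="folder")
            graph.edge(src, node, label=dst)

graph.render("linuxkit-mounts")  # writes linuxkit-mounts.png
```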

[Image: output from LinuxKitVis]

I've put the code on GitHub here, and would love to get your feedback!

Find me on Twitter as @mattdashj or in the DockerCommunity #Linuxkit room on Slack (TrXuk)!

PS. To all those in Europe for DockerCon… have a WHALE of a time! (sorry).


Using ZFS with LinuxKit and Moby

After I raised an issue about compiling ZFS, the awesome LinuxKit community designed this into their kernel build process.

Here's a how-to on building a simple (SSH) LinuxKit ISO with the ZFS drivers loaded.

Before we start, if you’re not familiar with LinuxKit or why you’d want to build a LinuxKit OS image, this article may not be for you; but for those interested in the what, why and how, the docs give a really good overview:

LinuxKit Architecture Overview.
Configuration Reference.

Step 0. Let's start at the very beginning.

To kick us off, we'll take an existing LinuxKit configuration, below. This will give us a system with SSH via public-key auth, and also an *insecure* root shell on the local terminal.
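The config looks something like this; a sketch adapted from LinuxKit's sshd example, where the <hash> tags are placeholders you'd pin to real linuxkit package versions:

```yaml
kernel:
  image: "linuxkit/kernel:4.9.x"
  cmdline: "console=tty0 console=ttyS0"
init:
  - linuxkit/init:<hash>
  - linuxkit/runc:<hash>
  - linuxkit/containerd:<hash>
services:
  # The *insecure* root shell on the local terminal
  - name: getty
    image: "linuxkit/getty:<hash>"
    env:
      - INSECURE=true
  # SSH, with public key auth via the authorized_keys file below
  - name: sshd
    image: "linuxkit/sshd:<hash>"
files:
  - path: root/.ssh/authorized_keys
    contents: "ssh-rsa AAAA...your-public-key-here"
```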

We would then build this into an ISO with the moby command:

```sh
moby build -format iso-bios simplessh.yml
```

Moby goes and packages up our containers, init system, and kernel, and gives us a bootable ISO. Awesome.

I booted the ISO in VirtualBox (there is a nicer way of testing images with Docker's HyperKit using the linuxkit CLI, but VirtualBox is closer to my use case).
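For the curious, the HyperKit route looks roughly like this on macOS; treat it as a sketch, as flags and defaults have moved around between linuxkit versions:

```sh
# Build the default (kernel+initrd) output and boot it straight into a
# local VM; on macOS `linuxkit run` defaults to the HyperKit backend.
moby build simplessh.yml
linuxkit run simplessh
```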

Step 1. The ZFS Kernel Modules

The ZFS kernel modules aren’t going to get shipped in the default LinuxKit kernel images (such as the one we’re using above). Instead, they can be compiled and added to your build as a separate image.

Here is one I built earlier; notice the extra container entry under the init: section, coming from my Docker Hub repo (trxuk). You'll see I'm also using a kernel that was built at the same time, so I'm 100% sure they are compatible.
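The relevant parts of the config look like this; treat the <tag> and <hash> values as placeholders, the real ones being pinned builds from my trxuk Docker Hub repo:

```yaml
kernel:
  # A kernel built at the same time as the ZFS modules, so they match
  image: "trxuk/kernel:<tag>"
init:
  - linuxkit/init:<hash>
  - linuxkit/runc:<hash>
  - linuxkit/containerd:<hash>
  - trxuk/zfs-kmod:<tag>           # the extra entry: ZFS kernel modules
onboot:
  # Loads the zfs module (and its dependencies) at boot
  - name: modprobe
    image: "trxuk/modprobe:<tag>"
    command: ["modprobe", "zfs"]
services:
  - name: getty
    image: "linuxkit/getty:<hash>"
    env:
      - INSECURE=true
  - name: sshd
    image: "trxuk/sshd:<tag>"      # sshd with the ZFS userspace tools added
files:
  - path: root/.ssh/authorized_keys
    contents: "ssh-rsa AAAA...your-public-key-here"
```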

We have also added a "modprobe" onboot: service; this just runs modprobe zfs to load the ZFS module and its dependencies (SPL, etc.) on boot.

The metadata in the modprobe image automatically maps /lib/modules and /sys into the modprobe container, so it can access the modules it needs to load.

For those interested, you can see that metadata here in the package build: https://github.com/eyz/linuxkit/blob/be1172294ac66144bedaaaa98270ea0a5c95d140/pkg/modprobe/Dockerfile#L17
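In short, it boils down to a label on the image, roughly like the below (paraphrased, not copied verbatim from the linked Dockerfile):

```dockerfile
# moby reads this label at build time and turns it into the container's
# runtime config, bind-mounting the module tree into the container.
LABEL org.mobyproject.config='{"binds": ["/lib/modules:/lib/modules", "/sys:/sys"], "capabilities": ["CAP_SYS_MODULE"]}'
```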

Also, it's worth highlighting that this package is still a PR for LinuxKit, hence the image is also coming from my own Docker Hub repo at the moment: https://github.com/linuxkit/linuxkit/pull/2506

1.1 Using my images

As my Docker Hub repos are public, you can go ahead and use the configuration I've provided above to use my builds of the kernel, ZFS modules, and modprobe package. You end up booting into a system that looks like this:

Huzzah! Success!

The observant amongst you may have noticed that the SSH image is also now coming from my trxuk/sshd Docker Hub repo; this is because I've edited it to have the ZFS userspace tools (zpool, zfs) built in, instead of having to run the following on each boot:
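(Something like this, assuming the stock Alpine-based sshd image; a reconstruction rather than the exact snippet:)

```sh
# Install the ZFS userspace tools inside the running sshd container
# (assumes an Alpine base, where zfs ships via apk).
apk add --no-cache zfs
```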

1.2 Building your own ZFS image

I built the images above using work recently committed into the linuxkit repo by Rolf Neugebauer; thanks to him, if you want to build your own, you now easily can:
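The build is driven from the kernel directory of the linuxkit repo; here's a sketch (the exact make targets may have moved, so check linuxkit/kernel for the current ones):

```sh
# Clone the linuxkit repo and work from its kernel directory.
git clone https://github.com/linuxkit/linuxkit.git
cd linuxkit/kernel
# Build and push the kernel packages, including the matching zfs-kmod
# image, to your own Docker Hub organisation:
make ORG=<your-dockerhub-org> push
```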

Once those commands have finished, you'll have two new Docker Hub repos: one containing the kernel, the other the matching zfs-kmod image to use in the init: section.

There is an issue currently preventing the zfs-kmod image from working with modprobe (depmod appears to run in the build, but the output modules.dep doesn't end up including zfs). I'll be opening a PR to resolve this; if you're building your own module as above, you may want to hold off.

I hope this helps; I've only just got to this stage myself (very much by standing on others' shoulders!), so watch out for deeper ZFS and LinuxKit articles coming soon! 🙂