Moving Onto Mirage
Git Your Unikernel Here!
For a little while I’ve had this site running as a MirageOS unikernel, shadowing the main site hosted on GitHub. I’ve finally decided to make the switch, as part of moving over to take advantage of Mirage’s DNS and TLS libraries. The setup involves four steps:
- Construct a static Jekyll site.
- Write a Travis YAML file to cause Travis to build the unikernel image and commit it back to the deployment repository.
- Write a Git `post-merge` hook for the deployment repository, so that the latest unikernel is automatically booted when a merge is detected, i.e., there is a new unikernel image.
- Write a `cron` job that periodically polls the deployment repository, pulling any changes.
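The last two steps might look something like the following. This is a hypothetical sketch, not the actual scripts used: the image name, paths, and `xl` invocation are all assumptions.

```shell
#!/bin/sh
# Hypothetical .git/hooks/post-merge for the deployment repository:
# a merge means a new unikernel image arrived, so reboot the appliance.
set -e
XENIMG=mortio      # Xen domain/image name (assumption)
REPO=/srv/deploy   # checkout of the deployment repository (assumption)
cd "$REPO"
sudo xl destroy "$XENIMG" 2>/dev/null || true   # stop the old instance, if any
sudo xl create "$XENIMG.xl"                     # boot the freshly pulled image

# A matching crontab entry polls for changes every few minutes; the
# post-merge hook only fires when the pull actually merges something:
#   */5 * * * * cd /srv/deploy && git pull --ff-only
```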
Building a Jekyll site is well-documented. I did find that I had to tweak `_config.yml` so as to make sure my local toolchain matched the one used by GitHub, ensuring consistency between versions of the site.
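For example, one common way to keep a local toolchain in step with GitHub Pages (an assumption here, not necessarily the exact tweak made for this site) is to install the same gem versions GitHub itself runs:

```shell
# Install the exact Jekyll toolchain GitHub Pages uses, so local builds
# match the hosted ones (github-pages is GitHub's version-pinning gem).
gem install github-pages
jekyll build    # render the site into _site/ with matching versions
```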
Bringing up the network
The `.travis.yml` file then specifies the three main targets for the CI test build to carry out: Unix with a standard sockets backend (`MIRAGE_NET=socket`), Unix with the Mirage network stack (`MIRAGE_NET=direct`), and the Xen backend (`MIRAGE_BACKEND=xen`). For the latter case, we must also specify the static IP configuration to be used (`MIRAGE_ADDR`, `MIRAGE_GWS`, `MIRAGE_MASK`). The `.travis.sh` script then calls the standard skeleton `.travis-mirage.sh` script after first building the site content using Jekyll.
This tests the three basic combinations of network backend for a Mirage appliance:

- UNIX/socket requires no configuration. The network device is configured with the loopback address, `127.0.0.1`. Appliances can be run without requiring `root` privileges, assuming they only bind to non-privileged ports.

  ```
  $ make configure.socket build
  ```

- UNIX/direct/dhcp requires no configuration if a DHCP server is running and can respond. The appliance must be run with `root` privileges to use the new network bridging capability of OSX 10.10, whereupon the DHCP client in the appliance follows the usual protocol.

  ```
  $ make configure.direct build
  ```

- Xen uses the Mirage network stack and expects static configuration of the network device.

  ```
  $ make configure.xen build \
      ADDR="188.8.131.52" GWS="184.108.40.206" MASK="255.255.255.128"
  ```
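As a rough sketch of what those `configure.*` targets might expand to underneath (hypothetical: the `NET`, `ADDR`, `GWS`, and `MASK` variable names and the `_mirage` directory are assumptions based on the invocations above):

```shell
# Hypothetical expansion of the Makefile targets above; each configure
# step is an alternative, followed by a common build step.
cd _mirage                            # unikernel source directory (assumption)

NET=socket mirage configure --unix    # configure.socket: sockets backend
NET=direct mirage configure --unix    # configure.direct: Mirage TCP/IP stack
ADDR="188.8.131.52" GWS="184.108.40.206" MASK="255.255.255.128" \
  mirage configure --xen              # configure.xen: static IP via env

make                                  # build whichever target was configured
```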
Using Travis CI
Of course, all that is for local development. For the live site, this is actually all wrapped up using Travis CI. Due to a small pull request waiting on the OCaml Travis CI skeleton scripts and a few Mirage releases currently being readied, this looks a little more complex than it needs to be (the `DEV_REMOTE` variables shouldn’t need to be specified in the long run), but anyway:
```yaml
language: c
script: bash -ex .travis.sh
env:
  matrix:
    - FORK_USER=mor1 DEV_REMOTE=git://github.com/mirage/mirage-dev
      OCAML_VERSION=4.02 MIRAGE_BACKEND=unix MIRAGE_NET=socket
    - FORK_USER=mor1 DEV_REMOTE=git://github.com/mirage/mirage-dev
      OCAML_VERSION=4.02 MIRAGE_BACKEND=unix MIRAGE_NET=direct
    - FORK_USER=mor1 DEV_REMOTE=git://github.com/mirage/mirage-dev
      UPDATE_GCC_BINUTILS=1 OCAML_VERSION=4.02 MIRAGE_BACKEND=xen
      MIRAGE_ADDR="220.127.116.11" MIRAGE_GWS="18.104.22.168"
      MIRAGE_MASK="255.255.255.128" XENIMG=mortio MIRDIR=_mirage DEPLOY=1
```
This uses the local `.travis.sh` script to build the three versions of the site, using the Mirage development OPAM repository so as to pick up the latest versions of all the various packages, and updating the Travis binutils to ensure the stubs for a couple of packages (notably …) build correctly.
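A minimal sketch of what that local `.travis.sh` wrapper might look like (hypothetical; how the skeleton script is obtained is an assumption):

```shell
#!/usr/bin/env bash
set -ex
# Hypothetical local .travis.sh: render the site content first, then hand
# over to the standard OCaml/Mirage Travis skeleton to configure and build
# the unikernel for the backend selected by the environment matrix.
jekyll build                 # static site content into _site/
bash -ex .travis-mirage.sh   # standard skeleton script (assumed vendored)
```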
Next stop: adding TLS and DNS support…