By Anil Madhavapeddy - 2014-02-11
We've just released MirageOS 1.1.0 into OPAM. Once the live site updates, you should be able to run opam update -u and get the latest version. This release is the "eat our own dogfood" release: as I mentioned earlier in January, a number of the MirageOS developers have decided to shift our own personal homepages onto MirageOS. There's nothing better than using our own tools to find all the little annoyances and shortcomings, and so MirageOS 1.1.0 contains some significant usability and structural improvements for building unikernels.
MirageOS separates the application logic from the concrete backend in use by writing the application as an OCaml functor that is parameterized over module types that represent the device driver signature. All of the module types used in MirageOS can be browsed in one source file.
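To make the pattern concrete, here is a deliberately simplified sketch of the idea. The names CONSOLE, Unikernel and Unix_console are illustrative only; the real MirageOS V1 signatures are richer than this.

```ocaml
(* Illustrative sketch, not the real mirage-types signatures: the
   application is a functor over an abstract device signature. *)
module type CONSOLE = sig
  type t
  val log : t -> string -> unit
end

(* Application logic depends only on the signature, not on any backend. *)
module Unikernel (C : CONSOLE) = struct
  let start c = C.log c "hello from the unikernel"
end

(* One concrete driver satisfying the signature (a Unix-style console). *)
module Unix_console : CONSOLE with type t = out_channel = struct
  type t = out_channel
  let log oc msg = output_string oc (msg ^ "\n")
end

(* Selecting a backend is just functor application. *)
module Main = Unikernel (Unix_console)
let () = Main.start stdout
```

Swapping in a Xen console would mean applying the same Unikernel functor to a different module with the same signature, leaving the application logic untouched.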
In MirageOS 1.1.0, Thomas Gazagnaire implemented a combinator library that makes it easy to separate the definition of application logic from the details of the device drivers that actually execute the code (be it a Unix binary or a dedicated Xen kernel). It lets us write code of this form (taken from mirage-skeleton/block):
let () =
  let main = foreign "Unikernel.Block_test" (console @-> block @-> job) in
  let img = block_of_file "disk.img" in
  register "block_test" [main $ default_console $ img]
In this configuration fragment, our unikernel is defined as a functor over a console and a block device by using console @-> block @-> job. We then define a concrete version of this job by applying the functor (using the $ combinator) to a default console and a file-backed disk image.
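The shape of the combinators can be modelled with ordinary functions. The sketch below is a toy model, not the real mirage DSL (which describes typed modules rather than plain values); it only shows how $ applies a parameterized job to concrete devices one at a time.

```ocaml
(* Toy model of the configuration combinators (hypothetical; the real
   DSL builds typed module descriptions, not plain closures). *)
type console = { log : string -> unit }
type block = { read : int -> string }

(* The application body, parameterized over its devices:
   morally "console @-> block @-> job". *)
let block_test (c : console) (b : block) () =
  c.log ("sector 0: " ^ b.read 0)

(* In this model, ($) is just left-associative function application. *)
let ( $ ) f x = f x

let default_console = { log = print_endline }
let block_of_file _path = { read = (fun _sector -> "<data>") }

let () =
  let main = block_test in
  let job = main $ default_console $ block_of_file "disk.img" in
  job ()
```

Each $ supplies one device, so the job is fully concrete only once every parameter declared with @-> has been applied, mirroring the fragment above.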
The combinator approach lets us express complex assemblies of device driver graphs by writing normal OCaml code, and the mirage command-line tool parses this at build time and generates a main.ml file that has all the functors applied to the right device drivers. Any mismatch in module signatures will result in a build error, thus helping to spot nonsensical combinations (such as using a Unix network socket in a Xen unikernel).
This new feature is covered in the tutorial, which now walks you through several skeleton examples to explain all the different deployment scenarios. It's followed by the website tutorial, which explains how this website works and how our Travis autodeployment throws the result onto the public Internet.
Who will win the race to get our website up and running first? Sadly for Anil, Mort is currently in the lead with an all-singing, all-dancing shiny new website. Will he finish in the lead though? Stay tuned!
Something that's more behind-the-scenes, but important for easier development, is a simplification in how we build libraries. In MirageOS 1.0, we had several packages that couldn't be simultaneously installed, as they had to be compiled in just the right order to satisfy their dependencies.
With MirageOS 1.1.0, this is all a thing of the past. All the libraries can be installed fully in parallel, including the network stack. The 1.1.0 TCP/IP stack is now built in the style of the venerable FoxNet network stack, and is parameterized across its network dependencies. This means that one can quickly assemble a custom network stack from modular components, such as this little fragment below from mirage-skeleton/ethifv4/:
module Main (C: CONSOLE) (N: NETWORK) = struct
  module E = Ethif.Make(N)
  module I = Ipv4.Make(E)
  module U = Udpv4.Make(I)
  module T = Tcpv4.Flow.Make(I)(OS.Time)(Clock)(Random)
  module D = Dhcp_clientv4.Make(C)(OS.Time)(Random)(E)(I)(U)
  (* ... rest of the unikernel logic elided ... *)
end
This functor stack starts with a NETWORK (i.e. Ethernet) device, and then applies functors until it ends up with a UDPv4, TCPv4 and DHCPv4 client. See the full file to see how the rest of the logic works, but this serves to illustrate how MirageOS makes it possible to build custom network stacks out of modular components. The functors also make it easier to embed the network stack in non-MirageOS applications, and the tcpip OPAM package installs pre-applied Unix versions for your toplevel convenience.
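The layering idea can be shown with a self-contained toy. The modules below are hypothetical stand-ins, not the mirage-tcpip API: each layer is a functor over the layer beneath it, modelled here as a plain writer that prepends its own header.

```ocaml
(* Toy illustration of functor layering (hypothetical modules, not the
   real Ethif/Ipv4 functors): each layer wraps the one below it. *)
module type LAYER = sig val write : string -> unit end

module Ethif_toy (N : LAYER) = struct
  let write payload = N.write ("eth|" ^ payload)
end

module Ipv4_toy (E : LAYER) = struct
  let write payload = E.write ("ip|" ^ payload)
end

(* The bottom of the stack: records whatever reaches the "wire". *)
module Loopback = struct
  let buf = Buffer.create 64
  let write s = Buffer.add_string buf s
end

(* Assemble the stack bottom-up, one functor application per layer. *)
module E = Ethif_toy (Loopback)
module I = Ipv4_toy (E)

let () =
  I.write "payload";
  print_endline (Buffer.contents Loopback.buf)
```

Writing at the top layer traverses every functor on the way down, which is the same shape as the Ethif/Ipv4/Udpv4 chain in the fragment above.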
To show just how powerful the functor approach is, the same stack can also be mapped onto a version that uses kernel sockets simply by abstracting the lower-level components into an equivalent that uses the Unix kernel to provide the same functionality. We explain how to swap between these variants in the tutorials.
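The swap works because both variants implement the same signature. A minimal sketch of that idea, with invented module names, looks like this:

```ocaml
(* Hypothetical sketch: one stack functor, two interchangeable backends
   implementing the same signature (direct driver vs. kernel sockets). *)
module type NETWORK = sig val name : string end

module Stack (N : NETWORK) = struct
  let describe () = "tcp/ip over " ^ N.name
end

module Direct_net = struct let name = "direct-driver" end
module Socket_net = struct let name = "kernel-sockets" end

(* The same application code, mapped onto either backend. *)
module Direct_stack = Stack (Direct_net)
module Socket_stack = Stack (Socket_net)

let () =
  print_endline (Direct_stack.describe ());
  print_endline (Socket_stack.describe ())
```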
While doing the 1.1.0 release in January, we also released quite a few libraries into OPAM, spanning both low-level libraries and networking and web libraries.
Dave Scott led the splitting up of several low-level Xen libraries as part of the build simplification. These now compile on both Xen (using the direct hypercall interface) and Unix (using the dom0 /dev devices) where possible.
All of Dave's hacking on Xen device drivers is showcased in this xen-disk wiki post that explains how you can synthesize your own virtual disk backends using MirageOS. Xen uses a split device model, and now MirageOS lets us build backend device drivers that service VMs as well as the frontends!
Last, but not least, Thomas Gazagnaire has been building a brand new storage system for MirageOS guests that uses git-style branches under the hood to help coordinate clusters of unikernels. We'll talk about how this works in a future update, but there are some cool libraries and prototypes available on OPAM for the curious, including the ogit command-line tool that it installs.

We'd also like to thank several conference organizers for giving us the opportunity to demonstrate MirageOS. The talk video from QCon SF is now live, and we also had a great time at FOSDEM recently (summarized by Amir here). So there are lots of activities, and no doubt little bugs lurking in places (particularly around installation). As always, please do let us know of any problems by reporting bugs, or feel free to contact us via our e-mail lists or IRC. Next stop: our unikernel homepages!