Saturday, January 30, 2016

unikernels & production

A recent blog post called into question the fitness of unikernels for production. The title was a bit misleading, as there are several unikernel systems out there, some of which are actually in production - one of our faves is the NEC/Bucharest Uni work on ClickOS, for example, which is used for NFV on switches and is clearly a class act.

However, I think the article also misses some of the main motives behind MirageOS (see e.g. Jitsu or the ASPLOS paper), which was based on experience of managing a lot of Xen-based cloud systems. Sure, unikernels are specialised, and don't (yet) possess a lot of the micro-management/debugging tools you have for kernel debugging or system tracing of Linux and so on, although a lot are on the way. But real-world production experience with OCaml was that you get faster system creation and way faster debugging times. However, that's still not the whole story - the point is that the whole toolchain for managing source, building a unikernel, deploying it and tracing it is much more homogeneous, so a whole system of unikernels is easier to manage (as per previous experience).

Crucially, we are also able to verify some components of MirageOS - e.g. Peter Sewell's group in Cambridge did this (for some definition of "this") a while back for the TCP/IP stack - plus confidence in David and Hannes' TLS implementation can be quite a bit higher than in the "industry standard" that had 65 vulnerabilities in one year alone.

But all this is missing yet another key factor - unikernels don't replace Xen/Linux or containers - they play side-by-side with them, so you can have flexibility and familiarity while affording better protection. That's in the Jitsu paper, btw, and I thought it was fairly clear.

Sure, there's some way to go - there always is - there was when Xen first shipped too. But the computer science behind this is not that bleeding edge (nor were VMs back in XenSource's day either :-), and the science is 15 years further on, so we should all benefit from that, in my opinion. Indeed, it took a day to add profiling.


xkcd has us in there thrice

Wednesday, January 27, 2016

readings in computer science found in a time capsule

Recently, I was re-reading the classic old article on Smashing the Stack for Fun and Profit in Phrack, and wondering where the positive alternative lesson might be. Quite a while ago, I did a port of the SR (Synchronizing Resources) programming language out of Arizona to the newly minted RS/6000 system out of IBM. The language is an elegant system for teaching principles of concurrency in lots of nice ways - at the time we didn't have a single agreed way to do it (e.g. wrong, as in C11, or possibly OK, as in Java), so there was a need for a pedagogic approach, and SR together with a nice book was very cool. However, we'd just bought a bunch of new IBM AIX systems with the new POWER RISC processor, which had a whole new instruction set - let's leave aside the amusing idea of a "reduced" or "regular" instruction set that includes "floating-point multiply and add" in its portfolio. In terms of registers, though, it's a pretty nice system.

So why is porting SR to the POWER CPU reminiscent of stack smashing, I hear you cry?
Because, I hear myself answer, you need to implement the threads system it uses to emulate real multi-core. And what does it take to do that, you continue? Well, you need to save the current context (all the registers and the thread state, i.e. PC and stack), then move to a different thread/stack by calling the scheduler with a pointer to that context. Doing this involves becoming familiar with the stack format used by most programming languages, as well as all the important (i.e. all) registers, etc. So basically slightly more than the Phrack folks... writing stuff that moves to another execution context, but can get back again correctly, is harder :-)
You can see some nice examples of the context-switch code for a variety of processors in the (not supported) SR archive.
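For a flavour of what's involved, here's a minimal sketch in portable C using POSIX ucontext (getcontext/makecontext/swapcontext) - not the SR runtime itself, and the names (worker, STACK_SIZE) are made up for illustration - that saves one register context and switches onto another stack and back again:

/* Minimal user-level context-switch sketch (not the SR runtime).
 * Assumes a POSIX system that still provides ucontext (glibc does,
 * though POSIX.1-2008 marks it obsolescent). */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, worker_ctx;

static void worker(void)
{
    printf("worker: running on its own stack\n");
    /* Save this context and resume the main/scheduler context. */
    swapcontext(&worker_ctx, &main_ctx);
    printf("worker: resumed, now finishing\n");
    /* Falling off the end resumes uc_link, i.e. main_ctx. */
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (!stack)
        return 1;

    /* Capture the current register state, then point the new context
     * at its own stack and entry function - the bit the SR port had
     * to hand-code per CPU, against each calling convention. */
    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp = stack;
    worker_ctx.uc_stack.ss_size = STACK_SIZE;
    worker_ctx.uc_link = &main_ctx;
    makecontext(&worker_ctx, worker, 0);

    printf("main: switching to worker\n");
    swapcontext(&main_ctx, &worker_ctx);   /* save main, run worker */
    printf("main: back, switching again\n");
    swapcontext(&main_ctx, &worker_ctx);   /* resume worker to its end */
    printf("main: done\n");
    free(stack);
    return 0;
}

Everything swapcontext hides here - which registers to save, where the stack pointer and PC live, how a frame is laid out - is exactly what the port had to get right by hand for the POWER registers and stack format.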

Meanwhile, I was also reading about the integrity check used for DNSSEC caches. That will be reported elsewhere, but the interesting thing is that it's a weak version of the IP header (and TCP) checksum algorithm. Again, this is something that exercises computer science 101 - you need to add up all the 16-bit fields in the buffer (imagine you have an n-byte buffer; if n is odd, pad with a zero-valued byte) and sum it as a vector of 16-bit values using "end-around carry" (ones-complement) arithmetic - basically, you have a 32-bit accumulator and loop over the buffer. When you are done, you do one more thing. In the checksum case: check whether any bits above the 16 least significant are set, fold them in (add them back), and then do one more add in case that overflows too. In the integrity check case: just zero any bits in the more significant 16 bits (i.e. AND the accumulator with 0xffff). For the ARM IP cksum case, see this code with inline asm - the crucial bit is lines 78 & 79, after the bne loop.
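To make the difference concrete, here's a small sketch of both sums in portable C - not the ARM assembler linked above, and the function names (sum16, inet_checksum, masked_sum) are mine for illustration. The checksum version folds the carries back in twice and then takes the ones' complement, as the IP/TCP checksum does; the integrity-check version just masks:

#include <stdint.h>
#include <stddef.h>

/* Sum the buffer as 16-bit big-endian words into a 32-bit accumulator;
 * an odd trailing byte is padded with a zero low byte. */
static uint32_t sum16(const uint8_t *buf, size_t len)
{
    uint32_t acc = 0;
    while (len > 1) {
        acc += ((uint32_t)buf[0] << 8) | buf[1];
        buf += 2;
        len -= 2;
    }
    if (len == 1)
        acc += (uint32_t)buf[0] << 8;
    return acc;
}

/* IP/TCP style: fold the carries back in (end-around carry), twice in
 * case the first fold itself carries, then take the ones' complement. */
uint16_t inet_checksum(const uint8_t *buf, size_t len)
{
    uint32_t acc = sum16(buf, len);
    acc = (acc & 0xffff) + (acc >> 16);
    acc = (acc & 0xffff) + (acc >> 16);
    return (uint16_t)~acc;
}

/* The allegedly simpler integrity check: just zero the top bits
 * instead of folding the carries back in. */
uint16_t masked_sum(const uint8_t *buf, size_t len)
{
    return (uint16_t)(sum16(buf, len) & 0xffff);
}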

Amusingly, back in the early days, I remember someone loop-unrolling asm code for the M68000 (that's not a 68020 or 68010, but a 68000 - it was sold as a "Codata" computer in the UK, but was a Sun-1, I think, never sold by Sun) - the code went slower, as the loop was now too big to fit in the minuscule instruction cache of said CPU...

Hum... what could possibly go oddly wrong with the allegedly simpler (by one instruction) integrity check algorithm? I leave that as an exercise for the coder.

Thursday, January 07, 2016

counciling the UK research councils

The UK research councils are, in my experience, unique in several ways in how proposals are reviewed.

Firstly, unlike almost any other research funding agency (DARPA and NSF in the USA, CNRS in France, DFG in Germany, the Japanese, Scandinavian, and just about any other national funding agency I have done reviewing for, which is a lot), the officers are not seconded experts, so the assignment of reviewers depends on the reviewers' self-descriptions - notoriously inaccurate. Unlike a journal (where the editor is an expert), there's not really a _peer_ review assignment process.

Secondly, the reviews can be rebutted by the proposer, but since the reviewers don't see each other's reviews, they can't be calibrated against each other (unlike at a conference or journal).

Thirdly, the panel are not the reviewers and are not allowed to re-review the proposal, even if they are experts, and only in exceptional circumstances will they discount an obviously incompetent or inappropriate review.

As a recipient of significant funding from the research councils, I am not saying this out of sour grapes, but more on behalf of my bewildered junior colleagues, who frequently receive inexplicably odd reviews and panel decisions. This is not good for community trust in the system - it may be OK for the obvious top 5-10% of proposals, but it results in effectively random decisions quite shortly below the very top (all-6s) ranked research. This is not good for confidence.

I can't give examples as that would be a breach of confidentiality, but everyone I know can tell a tale.

Maybe the new system after the recent review will involve people who have an answer to why the EPSRC and other councils should have a unique, and uniquely odd, system. I have never heard an evidence-based response to the comments above, which I have made several times to officials from the research councils. Of course, since they themselves are not seconded from the community, how would they know, in any case? However, they could try talking to colleagues in other countries a bit more and see what works (or not), to persuade us that this is not just some random "we do it this way because we always have done"...