01 Jan 2019, 18:00

Why RISC-V?

You might have noticed me tweeting a bunch about RISC-V in recent months. It is actually something I have been following for several years, since the formation of LowRISC in Cambridge, but this year has suddenly seen a huge maturing of the ecosystem.

In case you have been sitting under a rock hacking on something for some time, RISC-V is an open instruction set for CPUs. It is pronounced “risk five”. It looks a bit like MIPS, if you know your instruction sets, and yes it is very RISC, pretty minimal really. It is designed to be cleanly extended, and comes in 32, 64 and 128 bit variants. So far the 32 bit version is for microcontrollers, the 64 bit version is for operating systems like Linux that use MMUs, and the 128 bit version is for future dreams.

But an instruction set, even one without licensing and patent issues, is not that interesting on its own. There are some other options after all, although they all have issues. What is more interesting is that there are open, freely modifiable open source implementations. Lots of them. There are proprietary ones too, and hybrid ones with some closed IP and some open, but the community has been building in the open. Not just open cores, but new open toolchains (largely written in Scala) for design, test, simulation and so on.

SiFive core designer

The community has grown hugely this year, starting with SiFive's launch at Fosdem in January of the first commercially available RISC-V machine that could run Linux. Going to a RISC-V meetup (they are springing up in Silicon Valley, Cambridge, Bristol and Israel) you feel that this is hardware done by people who want to do hardware the way open source software is done. People are building cores, running in silicon or on FPGA, tooling, secure enclaves, operating systems, VC funded businesses and revenue funded businesses. You meet people from Arm at these meetups, finding out what is going on, while Intel is funding RISC-V businesses, as if they want to make serious competition for Arm or something! Meanwhile MIPS has opened its ISA as a somewhat late reaction.

A few years ago RISC-V was replacing a few small microcontrollers and custom CPUs; now we see companies like Western Digital announcing they will switch all their cores to RISC-V, while opening their designs. There are lots of AI/TPU chips being built with RISC-V cores, and Esperanto is building chips with over a thousand 64 bit RISC-V cores on each. The market for specialist AI chips came along just as RISC-V was maturing, and it was a logical new market.

RISC-V is by no means mature; it is forecast to ship 10-100 million cores in 2019, the majority of them 32 bit microcontrollers. But that adds to the interest: it is at the stage where you can now start building things, and lots of people are building things for fun or serious reasons, or porting code, or developing formal ISA models, or whatever. Open source wins because a huge community just decides it is the future and rallies around every piece of the ecosystem. 2018 was the year that movement became really visible for RISC-V.

I haven’t started hacking on any RISC-V code yet, although I have an idea for a little side project. I have joined the RISC-V Foundation as an individual member and hope to get to the RISC-V Workshop in Zurich and several meetups. See you there and happy hacking!

28 Dec 2018, 14:00

2018 Conferences

I gave quite a few talks this year, and also organized several conference tracks.

Config Mangement Camp

It was an excellent Config Management Camp this year, and fun to speak at.

QCon London 2018

I organized the Modern Computer Science in the Real World track at this conference; it was a great set of talks.

I also spoke in the Modern Operating Systems track.

KubeCon Cloud Native Europe

DockerCon

Registration required to watch videos.

Oscon

All Things Open

I don’t think this was recorded.

QCon SF

I curated the Modern Operating Systems track, and spoke on it. The videos are coming out on 7 and 14 January.

  • Thomas Graf, How to Make Linux Microservice-Aware With Cilium and eBPF
  • Alan Kasindorf, Caching Beyond RAM: The Case for NVMe
  • Justin Cormack, The Modern Operating System in 2018, a somewhat changed version of my QCon London talk
  • Adin Scannell, gVisor: Building and Battle Testing a Userspace OS in Go
  • Bryan Cantrill, Is It Time to Rewrite the Operating System in Rust? (Don’t miss this!)

DockerCon EU

Registration required to watch videos. I helped organize the Black Belt track which had some great talks:

I gave a joint talk on Open Policy Agent, and a re-run of the earlier talk with Liz Rice.

Kubecon Cloud Native US

Upcoming

Don’t miss the Modern Operating Systems track at QCon London, which I am curating; it should be excellent.

  • Jessie Frazelle on eBPF
  • Avi Deitcher on LinuxKit
  • Kenton Varda on Cloudflare Workers
  • others TBC

I am planning, or hoping, to attend at least the events below in 2019, and no doubt several others.

27 Dec 2018, 19:00

Confused Deputies Strike Back

A few weeks back Kubernetes had its first really severe security issue, CVE-2018-1002105. For some background on this, and how it was discovered, I recommend Darren Shepherd’s blog post; he discovered it via some side effects, and initially it did not appear to be a security issue, just an error handling issue. Of course we know well that many error handling issues can be escalated, but why was this one so bad?

To summarize the problem, there is an API server proxy component that clients can use to talk to other API endpoints. As the postmortem document says:

  • Kubernetes API server proxy components still use http/1.1 upgrade-based connection tunneling, which does not distinguish between request data sent by the apiserver while establishing the backend connection, and data sent by the requesting user

  • High and low-privilege API requests to aggregated API servers are proxied via the same component with the same high-permission transport credentials

Well, this security issue is actually well known enough to have its own name: the confused deputy problem, originally written about by Norm Hardy in 1988, although referring to an example from the 1970s. The essence of the problem is that there are three parties involved: a user, a proxy or deputy type component, and an object or service that needs to be accessed (or a similar set of endpoints). The user connects to the deputy to perform an action on an object, but the deputy can be persuaded to act on an object that it has access to rather than one the end user has access to.

Imagine asking your accountant to fill in your tax return. Your accountant has access to your tax return, but also to those of other customers. If the accountant is buggy or can be confused, she could fill in one of those other tax returns instead of yours. The general problem is that in order to run a tax return filing service, you need the ability to fill in lots of different people’s tax returns. You become a very privileged node, a superuser of tax returns. The tax office has to respect the service’s authority to fill in, and read, lots of tax returns, so the accountant’s credentials must be very privileged. We see similar designs in all sorts of places, like suid applications in Unix, which can do operations on behalf of any user, must be very highly trusted, and are often the source of security bugs.

What is the solution? Well, we could simply not have these deputies. Fill in your own tax return! But in effect this says do not use microservices. If every endpoint needs to have the code for filling in tax returns, we lose the benefits of microservices: we have to update lots of endpoints together, we cannot have a team building better accountant services, and so on. What we really want is for the accountant not to be a superuser. Instead she has no permissions of her own, and with our request we pass her credentials (maybe time limited) to update our tax return, but not to impersonate us in general. This access control model is called capability-based security: access is granted via unforgeable but transferable tokens that provide access to objects. You can imagine they are keys, like passing your car key to a valet service, rather than the valet service having a master key for all the cars it might need to park.
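To make this concrete, here is a minimal sketch in Go (my own illustration, not how Kubernetes or any real tax service is structured) of the capability version of the accountant: the deputy holds no credentials of its own, and can only act on the one return for which the caller hands it an unforgeable, narrowly scoped reference.

    package main

    import "fmt"

    // taxReturn is private state; there is no global registry the deputy can browse.
    type taxReturn struct {
        owner string
        data  string
    }

    // FillCap is a capability: an unforgeable reference that allows filling in
    // exactly one return, and nothing else. Possession is authority.
    type FillCap func(figures string)

    // grantFill mints a capability scoped to this return (it could also be time limited).
    func (t *taxReturn) grantFill() FillCap {
        return func(figures string) { t.data = figures }
    }

    // accountant is the deputy: it has no superuser credential, only what callers pass in.
    func accountant(fill FillCap) {
        fill("income: 100") // can act on my return, cannot reach anyone else's
    }

    func main() {
        mine := &taxReturn{owner: "me"}
        accountant(mine.grantFill()) // pass the car key, not the master key
        fmt.Println(mine.data)
    }

Within a single process this is just passing references; the harder question, which I come back to below, is what the distributed, cryptographic equivalent of such a reference looks like.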

The standard access control list (ACL) models of authorization are all about making decisions based on identity, a concept that clearly must not be transferable. I never want my accountant to have to (or be able to) pretend to be me to fill in my tax return. The classic solution in this case would be for me to be able to add additional people to the ACL for my tax return; this is modeled in newer ACL frameworks like NGAC from NIST (sorry, no link right now, the website is down due to the government shutdown). This does not immediately seem applicable to the Kubernetes issue though, and is much more complex than passing my API access credential to the API proxy server. At this point I highly recommend the excellent short paper ACLs don’t by Tyler Close, one of my favourite papers (I should do a Papers We Love session on it). His examples mainly come from the browser, another prevalent deputy with a lot of security issues, such as CSRF, another confused deputy attack. Capabilities are actually very simple to understand and reason about.

ACL based security is fine for many situations, in particular where there are only two parties and you just want to mediate access to a set of resources. But microservices do not appear to be in that sweet spot, as Kubernetes found out with its API proxy microservice. Bugs can be fixed, but as the retrospective points out, all changes will need to be examined for security issues. As Tyler Close says, “the correct implementation of an access policy cannot be ascertained by an examination of the ACLs configured for an application, but must also include an examination of the program’s source code. To date, this technique has been error prone.” It was not even the only confused deputy bug that week: the critical Zoom bug was the same issue, where UDP packets could confuse the deputy service. These are critical issues happening on a regular basis, and no doubt many more lurk.

The entire reason for microservices is to have third parties to delegate services to, and we need to shift away from ACL based models to capabilities for microservices. Of course this is non-trivial: distributed capabilities (as opposed to local ones) have not been used much, and we do not have a good infrastructure for them yet. I will write more about the practicalities in a further post, but we need to start making security microservice native too, not just adopting things that worked for monoliths.

26 Dec 2018, 20:00

QUIC for Unikernels

I had until recently mostly been ignoring QUIC. In case you had too, QUIC is a new-ish protocol developed at Google that will probably become HTTP version 3. The interesting pieces are that it runs over UDP not TCP, but supports reliable delivery by implementing retransmission itself; that it supports multiple streams without head of line blocking; and that it is designed to support encryption natively. Another important benefit is that connections can migrate from one IP to another without being dropped, as there is a connection ID independent of source and destination addresses. It is also designed not to ossify, with as much of the packet encrypted as possible, so that intermediaries cannot inspect what is inside the packet and make decisions. This avoids the great difficulty in changing TCP, where new extensions have only a small chance of working, as middle boxes will often strip things they do not understand, or not let the packets through; if all you can see is UDP flows it is harder to do much. Another benefit is a faster handshake for the encrypted case. Proposed extensions include different algorithms for congestion control, for example for different environments like in-datacentre or high latency connections, and forward error correction.

I had partly been ignoring QUIC because it has not yet been finalised, and had some temporary encryption included that was going to be replaced by TLS 1.3, and also because it seemed to be very tied up with HTTP/2. I also had some idea that layering encryption over TCP in possibly slightly non spec compliant ways might make sense. But then a few weeks back, a paper on nQUIC, or Noise on QUIC, came out, and I decided to take another look. It turned out that there were other people interested in removing the strong tie to TLS, and looking further it seems that the protocol is not that tied to HTTP either; it does provide a general transport with multiple streams. The IETF standard drafts split out the TLS implementation, and it looks like there is interest in pushing for a standard Noise based version. QUIC is not significantly more complex than TCP, especially as you can in effect hard code the number of streams if you do not want to use that feature, for example on an embedded system. Noise over QUIC, without HTTP, looks pretty reasonable for small systems that have enough performance to do encryption and a little memory, even down to microcontrollers. You could even customise it for some applications in closed environments.

So what has this got to do with unikernels? Well, the interesting thing about QUIC is that it always runs in userspace, not in the kernel, on conventional systems. So that puts unikernels on an equal footing: they can use the same implementations that other applications use. There are already implementations in C, C++, Go, Rust, TypeScript, Objective C, Python, and no doubt more. Interfacing QUIC to a network stack is pretty simple, as UDP is just a thin layer over ethernet. There is no reason why a unikernel implementation should be any less efficient; indeed it can probably be made more efficient, as it can bypass several abstraction layers.
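To make the thin-layer point concrete, here is roughly all a userspace QUIC implementation needs from the layer below it: the ability to send and receive UDP datagrams. This is a minimal Go sketch using only the standard library (the port and the echo behaviour are placeholders); a real QUIC stack would parse packets, route them by connection ID and feed streams on top of exactly this loop.

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        // All a userspace QUIC stack needs from the kernel or unikernel
        // network stack: a UDP socket to read and write datagrams on.
        conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 4433})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        buf := make([]byte, 1500) // one MTU-sized datagram at a time
        for {
            n, addr, err := conn.ReadFromUDP(buf)
            if err != nil {
                log.Fatal(err)
            }
            // A QUIC implementation would decrypt and parse the packet here,
            // route it by connection ID, and hand the data to streams.
            fmt.Printf("got %d byte datagram from %v\n", n, addr)
            // Replies go out the same way.
            if _, err := conn.WriteToUDP(buf[:n], addr); err != nil {
                log.Fatal(err)
            }
        }
    }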

There are some potential issues in that some firewalls block QUIC (which typically runs on UDP port 443); browsers will switch to TCP in that case. A QUIC only unikernel might not have that luxury, especially in some embedded situations. Larger machines can still fall back to TCP, but that may be a less optimised path. The main use case for QUIC would initially be traffic between dedicated unikernel or embedded services, not public endpoints, especially if you are using Noise rather than TLS for a very small implementation. There are some concerns that the CPU overhead of QUIC is higher, so it may not be suitable for embedded applications, and there may be no benefit over TCP in those cases. But there is freedom to iterate in a way that there is much less of with TCP, so I think it is definitely worth examining. Research into whether the CPU overhead is a necessary part of the protocol, and into how to measure efficiency in different environments, would also be productive.

22 Sep 2018, 12:00

Distributed Capabilities via Cryptography

This is a follow up to my previous capabilities post. As before, you probably want to read Capability Myths Demolished and the Noise Protocol specification first for full value extraction. This is a pretty rough draft; I was going to rewrite it, but having left it for several weeks after writing it, I decided just to publish it as is, a work in progress, and write another post later. This stuff needs a much clearer explanation.

I went to a Protocol Labs dinner last night (thanks for the invite Mike) and managed to corner Brian Warner from Agoric and ask about cryptographic distributed capabilities, which was quite helpful. This stuff has not really been written down, so here is an attempt to do so. I should probably add references at some point.

Non cryptographic capabilities

For reference and completeness, let us cover how you transmit capabilities without any cryptography, and what the downsides are. The basic model is called the “Swiss number”, after a (I suspect somewhat mythical) model of an anonymous Swiss bank account, where you just present the account number, which you must keep secret, in order to deposit and withdraw money, no questions asked. This is pretty much the standard model in the historic literature, largely written before public key cryptography was feasible to use. In modern terms, the Swiss number capability should be a random 256 bit number, and the connection should of course be encrypted to prevent snooping. The implementation is easy: just check (in constant time!) that the presented number is equal to the object’s number. Minting these is trivial. The capability is a random bearer token.
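A minimal Go sketch of minting and checking a Swiss number (illustrative only, not taken from any particular capability system):

    package main

    import (
        "crypto/rand"
        "crypto/subtle"
        "fmt"
    )

    // swissNum is a 256 bit random bearer capability.
    type swissNum [32]byte

    // mint creates a fresh capability; whoever holds the value has access.
    func mint() swissNum {
        var s swissNum
        if _, err := rand.Read(s[:]); err != nil {
            panic(err)
        }
        return s
    }

    // check compares the presented capability in constant time, so an attacker
    // cannot recover the secret byte by byte from timing differences.
    func check(object, presented swissNum) bool {
        return subtle.ConstantTimeCompare(object[:], presented[:]) == 1
    }

    func main() {
        c := mint()
        fmt.Println(check(c, c))      // true: the right number was presented
        fmt.Println(check(c, mint())) // false: a different capability
    }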

The downsides are pretty clear. First, you may present the capability to the wrong party for checking. Checking and transfer of capabilities are very different operations, and we would like checking not to reveal the token. This is a general problem with bearer tokens, such as JWT: they can easily be presented to the wrong party, or to a man in the middle attacker. We would like cryptographic protection for the check operation to avoid this. The second downside, which is somewhat related, is that we have no way to identify the intended object. Any party who has a copy of the capability can pretend to be the object it refers to, as there is no asymmetry between the parties. We have to rely on some external naming system, which might be subverted. The third issue is that we have to build our own encryption, and the token we have does not help, as it does not act as a key or help identify the other party. So we have to rely on anonymous key exchange, which is subject to man in the middle attacks as we do not know an identity for the other participant, or again on some external source of truth, such as the PKI system.

These downsides are pretty critical for modern secure software, so we need to do better. We will use these three properties (check does not reveal, object identifier, and encryption included) to analyze some alternatives.

There are some things I am not going to discuss in this post. I mentioned the model of secret public keys, which appears in some of the literature, in an earlier post, but will ignore it here as it has security issues. I am not going to cover macaroons either; they are another form of bearer token with differently interesting properties.

Cryptographic Capabilities

The obvious way to solve the second problem, being able to securely identify the object that the capability refers to, is to give the object an asymmetric key. We can then hand out the public key, which can usefully be the object identifier and be used to locate it, while the object keeps its private key secure and does not hand it out to any other object (it can be kept in a TPM type device, as it is only needed for restricted operations). We can now set up an encrypted channel with this object, and as we know the public key up front, we can be sure we have connected to the right object if we validate this correctly. In Noise Protocol terms, we can use an NK handshake pattern, where the connecting object is anonymous but knows the public key it is connecting to. We can also use XK (or IK) if we want to pass the identity of the connecting object, for example for audit purposes. Once we have connected, we can use the Swissnum model to demonstrate we have the capability, but without the risk of passing the capability to the wrong party.

However, we can improve on this by using the Swissnum as a symmetric key, and incorporating it into the asymmetric handshake as a secret known by both parties. In Noise Protocol terms this is the NKpsk0 handshake (or XKpsk0) that I mentioned in my previous post. The handshake will only succeed if both parties have the same key, as the key is securely mixed into the shared symmetric key generated from the Diffie-Hellman exchange of the public keys. This is even better than the Swissnum method above, as the handshake is shorter: you do not need the extra phase to pass and potentially acknowledge the Swissnum. The capability itself looks pretty similar, as a symmetric key is generally just an arbitrary random sequence of 256 bits or so anyway.
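As a rough sketch of the initiator side, here is what NKpsk0 setup might look like using the flynn/noise Go package (the field names and pattern constants here are assumptions based on that library, and the keys are generated in place purely for illustration); dropping the pre-shared key fields gives the plain NK handshake described above.

    package main

    import (
        "crypto/rand"
        "log"

        "github.com/flynn/noise"
    )

    func main() {
        cs := noise.NewCipherSuite(noise.DH25519, noise.CipherChaChaPoly, noise.HashBLAKE2s)

        // The object's identity is its static keypair; the public half is the
        // object identifier handed out to anyone who may want to connect.
        objectKey, err := cs.GenerateKeypair(rand.Reader)
        if err != nil {
            log.Fatal(err)
        }

        // The capability itself: a 256 bit secret shared by everyone who holds it.
        capability := make([]byte, 32)
        if _, err := rand.Read(capability); err != nil {
            log.Fatal(err)
        }

        // Initiator side of NKpsk0: anonymous initiator (N), known responder
        // static key (K), capability mixed in as a pre-shared key at position 0.
        initiator, err := noise.NewHandshakeState(noise.Config{
            CipherSuite:           cs,
            Random:                rand.Reader,
            Pattern:               noise.HandshakeNK,
            Initiator:             true,
            PeerStatic:            objectKey.Public,
            PresharedKey:          capability,
            PresharedKeyPlacement: 0, // psk0
        })
        if err != nil {
            log.Fatal(err)
        }

        // The handshake only completes if the responder holds both the private
        // half of objectKey and the same capability value.
        msg, _, _, err := initiator.WriteMessage(nil, nil)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("first NKpsk0 handshake message: %d bytes", len(msg))
    }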

This model does solve all three of our issues: a handshake to the wrong party does not reveal the capability, the object cannot be spoofed by another (without stealing the private key), and the keys support an encrypted channel. It is not the only mechanism, however. Minting new capabilities is easy, you just create a new symmetric key, and creating objects is easy, you just create an asymmetric keypair.

Brian Warner pointed out to me yesterday that instead of using an asymmetric key and a symmetric key, we can present a certificate in place of the symmetric key. This is slightly more complex. To demonstrate possession of a capability, we present a certificate to the object. We have to sign an ephemeral value that the object presents to us, and the simplest method is for the object that the capability is for to hold the public key that checks the signature, while the capability is the private signing key. Anyone with the capability can directly sign the certificate, and you pass the private key around to transfer the capability. Note that the subject of the capability does not need to know the private signing key, so it cannot necessarily pass on a capability to access itself. This might be an advantage in some circumstances. Note also that the holders of the capability need to transfer a private key to pass the capability on, so they cannot hold the key in a TPM device that does not allow key export, or indeed in a general cryptographic API that only supports a private key type with signing operations but no export operation, which has been common practice. Note that the Noise Protocol Framework support for signatures is a work in progress, scheduled for a revision later this year.
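A small sketch of that signature variant using only the Go standard library (the challenge-response framing is my simplification, not a defined wire protocol): the capability is an ed25519 private signing key that holders pass around, the object stores only the corresponding public key, and proving possession means signing a fresh ephemeral value chosen by the object.

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
    )

    func main() {
        // Mint a capability: the private key is the capability and is passed
        // between holders; the object is given only the public verifying key.
        objectVerifyKey, capabilityKey, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }

        // The object presents a fresh ephemeral challenge...
        challenge := make([]byte, 32)
        if _, err := rand.Read(challenge); err != nil {
            panic(err)
        }

        // ...and the capability holder signs it to prove possession.
        sig := ed25519.Sign(capabilityKey, challenge)

        // The object checks the signature. Note that the object cannot produce
        // such a signature itself, so it cannot pass on a capability to itself.
        fmt.Println("capability accepted:", ed25519.Verify(objectVerifyKey, challenge, sig))
    }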

If you don’t want to pass around private keys, you could use a chained signature model, where each party that passes on a capability adds to a signature chain, authenticating the public key of the next party, all chaining down to the original key. This would mean unbounded chain lengths though, which would be a problem for many use cases. It would provide an audit trail of how each party got the capability, but transparency logs probably do this more effectively.

Thinking about this model further, we do not actually need signatures; we can use encryption keys directly. As before, the object the capability is granted over has a private encryption key, but instead of using signatures, we create an asymmetric encryption keypair, give the object the public key, and give capability holders the private key, which they pass around as the capability. So to validate an encryption handshake, the object checks that the capability holder has the correct private key, while the capability holder validates that it is talking to the object that possesses the identity private key. In Noise protocol terms this is a KK handshake, where both parties know the public key for the other party, and each verifies that the other possesses the corresponding private key. The signature version is a KK variant with one signature substituted for an encryption key, and there is another variant where both keys are replaced by signatures; the Noise signature modifiers allow signatures to substitute for longer-term-with-ephemeral Diffie-Hellman key agreement in any combination, with some deferral modifications.
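In the same flynn/noise terms (again, names assumed from that library), the KK version simply configures both static keys up front: the holder uses the capability keypair as its static key and must know the object's public key, and the object does the mirror image with Initiator set to false.

    package main

    import (
        "crypto/rand"
        "log"

        "github.com/flynn/noise"
    )

    func main() {
        cs := noise.NewCipherSuite(noise.DH25519, noise.CipherChaChaPoly, noise.HashBLAKE2s)

        // The object's identity keypair, and the capability keypair whose
        // private half is what capability holders pass around.
        // (Error handling elided for brevity in this sketch.)
        objectKey, _ := cs.GenerateKeypair(rand.Reader)
        capabilityKey, _ := cs.GenerateKeypair(rand.Reader)

        // Capability holder side of KK: no pre-shared symmetric key is needed;
        // possession of capabilityKey's private half is the capability.
        holder, err := noise.NewHandshakeState(noise.Config{
            CipherSuite:   cs,
            Random:        rand.Reader,
            Pattern:       noise.HandshakeKK,
            Initiator:     true,
            StaticKeypair: capabilityKey,
            PeerStatic:    objectKey.Public,
        })
        if err != nil {
            log.Fatal(err)
        }

        msg, _, _, err := holder.WriteMessage(nil, nil)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("first KK handshake message: %d bytes", len(msg))
    }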

So we see that rather than using the mixed symmetric and asymmetric key model (NKpsk) that I discussed before, we can use asymmetric key only (KK) models for distributed capabilities. The differences for the user are relatively small, as both methods fulfil our three criteria. The main differences are that the object need not be able to pass on capabilities to itself in the public key only model, and that we have to pass around asymmetric private keys, which there is sometimes a reluctance to do. For quantum resistance, it is possible to use a combination of both symmetric and asymmetric keys, sharing a symmetric key among all parties.
