21 Jul 2019, 20:52

Fuzz rising

Go and read the excellent blog post from Cloudflare on their recent outage if you haven’t already.

I am not going to talk about most of it, just a few small points that especially interest me right now, which are definitely not the most important things from the outage point of view. This post got a bit long, so I split it up; this is part one.

Fuzz testing has been around for quite some time. American Fuzzy Lop, released in 2013, was the first fuzzer to need very little configuration to find security issues. This paper on mutational fuzzing is a starting point if you are interested in the details of how this works. The basic idea is that you start with a valid input and gradually mutate it, looking for “interesting” mutations that change the path the code takes. This is often coverage guided: the fuzzer tries to cover all code paths by varying the input data.
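
As a toy illustration of the mutation step (entirely made-up code; the coverage feedback that real fuzzers rely on to keep “interesting” mutants is elided), here is what gradually mutating a valid input looks like:

```go
package main

import (
	"fmt"
	"math/rand"
)

// mutate applies one small random change to a copy of the input, in the
// spirit of AFL's mutation stages, vastly simplified.
func mutate(seed []byte, rng *rand.Rand) []byte {
	out := append([]byte(nil), seed...)
	i := rng.Intn(len(out))
	switch rng.Intn(3) {
	case 0:
		out[i] ^= 1 << uint(rng.Intn(8)) // flip a random bit
	case 1:
		out[i] = byte(rng.Intn(256)) // overwrite a byte with a random value
	case 2:
		out = out[:i+1] // truncate the input
	}
	return out
}

func main() {
	rng := rand.New(rand.NewSource(1))
	input := []byte(`{"name":"valid"}`) // start from a valid input
	for i := 0; i < 5; i++ {
		input = mutate(input, rng)
		fmt.Printf("%q\n", input) // a real fuzzer feeds each mutant to the target
	}
}
```

A real fuzzer keeps a corpus of inputs, instruments the target to see which branches each mutant reaches, and preferentially mutates inputs that found new paths; that loop is the whole trick.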

Fuzz testing is not the only tool in the space of automated security issue detection. There is traditional static analysis tooling, although it is generally not very effective at finding most security issues, other than a few things like SQL injection that are often well covered. It tends to have a high false positive rate, and unlike fuzz testing it will not give you a helpful test case. Of course there are many other things to consider in comprehensive security testing; this list of considerations is very useful. Another technique is automated variant analysis: taking an existing issue and finding other cases of the same issue, as done by platforms such as Semmle.

Fuzzing as a service is available too. Operationally, fuzzing is not something you want to run in your CI pipeline, as it is not a test that finishes; it is something you should run continuously, 24/7, on the latest version of your code, as it still takes a long time to find issues, and the process is randomised. Services include Fuzzbuzz, a fairly new commercial service (with a free tier) who are very friendly; Microsoft Security Risk Detection; and Google’s OSS-Fuzz for open source projects.

As Cloudflare commented, “In the last few years we have seen a dramatic increase in vulnerabilities in common applications. This has happened due to the increased availability of software testing tools, like fuzzing for example.” Some numbers give an idea of the scale: as of January 2019, Google’s ClusterFuzz had found around 16,000 bugs in Chrome and around 11,000 bugs in over 160 open source projects integrated with OSS-Fuzz. We can see the knock-on effect on the rate of CVEs being reported.

If we look at the kinds of issues found, the breakdown from data in a 2017 Google blog post is interesting.

As you can see, a very large proportion are buffer overflows, manual memory management issues like use after free, and the “ubsan” category, which is all the stuff in C or C++ code that, if you happen to write it, means the compiler can turn your program into hot garbage if it feels like it. Memory safety is still a major cause of errors, as you can see if you follow the @LazyFishBarrel twitter account. Note that the majority of projects are still not running comprehensive automated testing for these issues, and the problem is rapidly growing. There are two factors at play: first, memory errors are an easier target for current tooling than many other sorts of errors; second, there is a huge existing codebase containing vast numbers of these errors.

Microsoft Security Response Center also just released a blog post with some more numbers. While ostensibly about Microsoft’s gradually increasing use of Rust, the important quote is that “~70% of the vulnerabilities Microsoft assigns a CVE each year continue to be memory safety issues”.

In my talk at Kubecon I touch on some of these issues with C (and to some extent C++) code. The majority of the significant issues found in the CNCF security audits were in C or C++ code, despite the fact that there is not much of this code in the reviewed projects.

Most of the C and C++ code that causes the majority of open source CVEs is shipped in Linux distributions. Linux distros are the de facto package manager for C code, and for C++ to a lesser extent; neither of these languages has developed its own language specific package management yet. From the Debian stats, of the billion or so lines of code, 43% is ANSI C and 24% is C++, which has many of the same problems in many codebases. So around 670 million lines of code, in general without enough maintainers to deal with the existing and coming waves of security issues that fuzzing will find. This is the backdrop to increasing complaints about unfixed CVEs in Docker containers, where these tend to be more visible due to wider use of scanning tools.

Is it worth fuzzing safer languages such as Go and Rust? Yes, you will still find edge conditions, and potentially other cases such as race conditions, although the payoff will not be nearly as high. For C code it is absolutely essential, but bugs and security issues are found in other languages too. Oh, and fuzzing is fun!
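
As a sketch of what this looks like in a safe language, here is a go-fuzz style harness (the `Fuzz(data []byte) int` entry point is the convention used by github.com/dvyukov/go-fuzz; the parser itself is an invented example) around exactly the kind of edge condition fuzzing finds in Go:

```go
package parser

import "bytes"

// parseRecord is a hypothetical target. Bug: when ':' is absent,
// IndexByte returns -1 and data[:i] panics with a slice bounds error.
// No memory corruption, just input the author never considered.
func parseRecord(data []byte) (key, value string) {
	i := bytes.IndexByte(data, ':')
	return string(data[:i]), string(data[i+1:])
}

// Fuzz is the entry point go-fuzz builds and drives; returning 1 marks
// an input as interesting, 0 is neutral.
func Fuzz(data []byte) int {
	parseRecord(data)
	return 0
}
```

go-fuzz-build compiles an instrumented binary and go-fuzz drives the mutation loop against it, saving any crashing inputs for replay; a bug like the one above falls out within seconds.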

My view is that we are just at the beginning of this spike, and we will not simply find all the issues and move on. Rather, the Linux distributions, which carry this code, will end up as toxic industrial waste areas, the Agbogbloshie of the C era. As the incumbents, no, they will not rewrite it in Rust; instead smaller, more nimble, different types of competitor will outmanoeuvre the dinosaurs. Linux distros generally consider that most of their role is packaging, not creation, with a few exceptions like Systemd; most of their engineering work is in the long term support business, which still pays well despite being increasingly out of step with how non-C software is used and with how cloud deployments work, where updating software is part of normal life and five or ten year software lifetimes without updates are not the target. We are not going to see the Linux distros work on solving this issue.

Is this code exploitable? Almost certainly yes, with sufficient effort. We discussed Thomas Dullien’s paper Weird machines, exploitability, and provable unexploitability at the Säntis Systems Summit recently; I highly recommend it if you are interested in exploitability. But overall, proving code is not exploitable is in general not going to be possible, and attackers always have the advantage. Sure, they will pick the easiest things first, but most attacks are automated now, and attacking scales well. Security is risk management, but with memory safety errors being relatively easy to exploit in many cases, they are a high risk. Obviously not all this code is exposed to attackers via the network or attacker supplied data, especially in containerised environments, but some is, and you will spend increasing amounts of time working out what is a risk. The sheer volume of security issues just makes risk management more difficult.

If you are a die-hard C hacker and want to remain one, the last bastion of C is of course OpenBSD. Throw up the pledge barricades, remove anything you can, keep reviewing. That is the only heroic path left.

In the short term, start to explore and invest in ways to replace every legacy C dependency you are currently using. Write a deprecation roadmap. Cut down your dependencies on Linux distributions. Shift to memory safe languages everywhere, and if you use C++ make sure you only use the safer subset. Look to smaller, more nimble Linux distributions that start shipping memory safe code; although the moves here have been slow so far, you only need a little, as once distros stop having to be C package managers they can do a better job of being minimal userspaces. There isn’t much code you really need to run modern applications that themselves do not have many C dependencies, as implementations like LinuxKit show. If you just sit on top of the kernel, using its ABI stability guarantees, there is little you need to do other than a little configuration; well, other than worry about the bugs in a kernel written in … C.

Memory unsafe languages are not going to get better, or safe. It is time to move on.

27 Jan 2019, 19:00

Kubernetes as an API standard

There is now a rustyk8s mailing list to discuss implementations of the Kubernetes API in Rust.

There was a lot of interest in my tweet a couple of months ago about writing an implementation of the Kubernetes API in Rust. I had a good conversation at Kubecon with some people about it, and thought I should explain more about why it is interesting.

Kubernetes is an excellent API for running code reliably. So much so that people want to run it everywhere. People have described it as the universal distributed systems API, and something that will eventually be embedded into hardware, or the kernel (or Linux) of distributed systems. Maybe some of these are ambitious, but there is nothing wrong with ambition, and hey, it is a nice, simple API at its core. Essentially it just does reconciliation between the world and desired state for an extensible set of things, which include a concept of a pod by default. That is pretty much it, a simple idea.
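
A minimal sketch of that reconciliation idea, with invented types and nothing Kubernetes-specific (a real controller works on typed objects and watch events rather than bare counts), might look like this:

```go
package main

import "fmt"

// state maps a resource name to a replica count; Kubernetes generalises
// this to arbitrary typed objects.
type state map[string]int

// reconcile converges observed state towards desired state.
func reconcile(desired, observed state) {
	for name, want := range desired {
		switch have := observed[name]; {
		case have < want:
			fmt.Printf("%s: create %d instance(s)\n", name, want-have)
			observed[name] = want // stand-in for actually starting pods
		case have > want:
			fmt.Printf("%s: delete %d instance(s)\n", name, have-want)
			observed[name] = want
		}
	}
	for name := range observed {
		if _, ok := desired[name]; !ok {
			fmt.Printf("%s: garbage collect\n", name)
			delete(observed, name)
		}
	}
}

func main() {
	desired := state{"web": 3}
	observed := state{"web": 1, "old-job": 1}
	reconcile(desired, observed) // a real controller loops on watch events
}
```

Everything else, schedulers, controllers, kubelets, is in some sense this loop specialised for a particular kind of thing.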

A simple idea, but not simply expressed. If you build a standalone Kubernetes system, somehow that simple idea amounts to a gigabyte of compiled code. Sure, there are some extraneous debug symbols, and a few extra versions of etcd for version upgrades, and maybe one day Go will produce less bloated code, but that is not going to cut it for embedded systems and other interesting potential use cases of Kubernetes. Nor is it easy to understand, find your way around the code and hack on it.

Another problem with Kubernetes is that it suffers from the problem that the implementation is the specification. Lots of projects start like that, but as they mature the specification is often separated out, and alternative implementations can thrive. Without an independent specification, alternative implementations often have to copy every accidental nuance of the original, and even replicate bugs. Kubernetes is at the point where starting to move towards an independent specification would be productive. We know that there are some rough edges in the implementation that need to be cleared up, and some parts where the API is not yet the best it could be.

One approach is to try to cut back the current implementation to a more manageable size, by removing parts. This is what Darren Shepherd of Rancher has done with “k3s”, removing a million or so lines of code. But a second, complementary approach is to build a new simple implementation from the ground up without any baggage to start with. Then by looking at differences in behaviour, you can start to understand which parts are the core specification, and which parts are accidental. Given that the way the code for Kubernetes is written has been described as a “clusterfuck” by Kris Nova, this seems a productive route: “Unknown to most, Kubernetes was originally written in Java… If the anti patterns weren’t enough we also observe how Kubernetes has over 20 main() functions in a monolithic “build” directory… Kubernetes successfully made vendoring even more challenging than it already was, and discuss the pitfalls with this design. We look at what it would take to begin undoing the spaghetti code that is the various Kubernetes binaries.”

Of course we could write a new implementation in Go, but the temptation would then be to import bunches of existing code, and it might not end up that different. A different language makes sense to stop that. The aim should be to build the minimum needed to implement the core API. So what language? Rust makes the most sense, it seems, although there are some other options.

There is a small but growing community of cloud native Rust projects. In the CNCF, there is TiKV from PingCAP and the Linkerd 2 data plane. Another project that has recently been launched in the space is AWS Firecracker. The Rust ecosystem is especially strong in security and in control of memory usage, both of which are important for effective scalable systems. In the last year or so the core libraries needed in the cloud native space have really been filled in.

So are you interested in hacking on a greenfield implementation of Kubernetes in Rust? There is not yet a public codebase to hack on, but I know that there are some people hacking in private. The minimum viable project is something that you can talk to with kubectl, that can run pods, and that supports API extensions. The conformance tests should help, although they are not complete enough to constitute a specification by any means, but starting to pass some of them would be a satisfying achievement. If you want to meet up with the cloud native Rust community, a bunch of people will be at Fosdem in early February, and I will sort out a fringe event at KubeCon EU as well.
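
To give a feel for how small that first kubectl milestone starts out, here is a toy stub (in Go purely for illustration; the /api and /apis discovery endpoints and the APIVersions/APIGroupList payload kinds are real Kubernetes conventions, everything else is omitted and this is nowhere near conformance):

```go
package main

import (
	"log"
	"net/http"
)

// A toy discovery stub: just enough for kubectl to start a conversation.
// A real server adds authn/z, OpenAPI, watch streams and storage.
func main() {
	mux := http.NewServeMux()
	serve := func(path, body string) {
		mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
			w.Header().Set("Content-Type", "application/json")
			w.Write([]byte(body))
		})
	}
	serve("/api", `{"kind":"APIVersions","versions":["v1"]}`)
	serve("/apis", `{"kind":"APIGroupList","apiVersion":"v1","groups":[]}`)
	log.Fatal(http.ListenAndServe("127.0.0.1:8001", mux))
}
```

Something like `kubectl --server=http://127.0.0.1:8001 api-versions` should then have a server to answer it; running pods and passing conformance tests is the long road from there. Happy hacking!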

01 Jan 2019, 18:00

Why RISC-V?

You might have noticed me tweeting a bunch about RISC-V in recent months. It is actually something I have been following since the formation of LowRISC in Cambridge several years ago, but this year has suddenly seen a huge maturing of the ecosystem.

In case you have been sitting under a rock hacking on something for some time, RISC-V is an open instruction set for CPUs. It is pronounced “risk five”. It looks a bit like MIPS, if you know your instruction sets, and yes it is very RISC, pretty minimal really. It is designed to be cleanly extended, and has 32, 64 and 128 bit implementations. So far the 32 bit version is for microcontrollers, the 64 bit for operating systems like Linux with MMUs, and the 128 bit version is for future dreams.

But an instruction set, even one without licensing and patent issues, is not that interesting on its own. There are some other options there after all, although they all have some issues. What is more interesting is that there are open and freely modifiable open source implementations. Lots of them. There are proprietary ones too, and hybrid ones with some closed IP and some open, but the community has been building in the open. Not just open cores, but new open toolchains (largely written in Scala) for design, test, simulation and so on.

SiFive core designer

The growth of the community this year has been huge, starting with the launch by SiFive at Fosdem in January of the first commercially available RISC-V machine that could run Linux. Going to a RISC-V meetup (they are springing up in Silicon Valley, Cambridge, Bristol and Israel) you feel that this is hardware done by people who want to do hardware the way open source software is done. People are building cores, running in silicon or on FPGA, tooling, secure enclaves, operating systems, VC funded businesses and revenue funded businesses. You meet people from Arm at these meetups, finding out what is going on, while Intel is funding RISC-V businesses, as if they want to make serious competition for Arm or something! Meanwhile MIPS has opened its ISA as a somewhat late reaction.

A few years ago RISC-V was replacing a few small microcontrollers and custom CPUs; now we see companies like Western Digital announcing they will switch all their cores to RISC-V while opening their designs. There are lots of AI/TPU cores being built around RISC-V, and Esperanto is building chips with over a thousand 64 bit RISC-V cores on them. The market for specialist AI chips came along just as RISC-V was maturing, and it was a logical new market.

RISC-V is by no means mature; it is forecast to ship 10-100 million cores in 2019, the majority of them 32 bit microcontrollers. But that adds to the interest: it is at the stage where you can now start building things, and lots of people are building things for fun or serious reasons, porting code, developing formal ISA models, or whatever. Open source wins because a huge community just decides it is the future and rallies around every piece of the ecosystem. 2018 was the year that movement became really visible for RISC-V.

I haven’t started hacking on any RISC-V code yet, although I have an idea for a little side project, but I have joined the RISC-V Foundation as an individual member and hope to get to the RISC-V Workshop in Zurich and several meetups. See you there and happy hacking!

28 Dec 2018, 14:00

2018 Conferences

I gave quite a few talks this year, and also organized several conference tracks.

Config Management Camp

It was an excellent Config Management Camp this year, and fun to speak at.

QCon London 2018

I organized the Modern Computer Science in the Real World track at this conference; it was a great set of talks.

I also spoke in the Modern Operating Systems track.

KubeCon Cloud Native Europe

DockerCon

Registration required to watch videos.

Oscon

All Things Open

I don’t think this was recorded.

QCon SF

I curated the Modern Operating Systems track, and spoke on it. The videos are coming out on 7 and 14 January.

  • Thomas Graf, How to Make Linux Microservice-Aware With Cilium and eBPF
  • Alan Kasindorf, Caching Beyond RAM: The Case for NVMe
  • Justin Cormack, The Modern Operating System in 2018, a somewhat changed version of my QCon London talk
  • Adin Scannell, gVisor: Building and Battle Testing a Userspace OS in Go
  • Bryan Cantrill, Is It Time to Rewrite the Operating System in Rust? (Don’t miss this!)

DockerCon EU

Registration required to watch videos. I helped organize the Black Belt track, which had some great talks.

I gave a joint talk on Open Policy Agent and a re-run of the earlier talk with Liz Rice.

Kubecon Cloud Native US

Upcoming

Don’t miss the Modern Operating Systems track at QCon London, which I am curating; it should be excellent.

  • Jessie Frazelle on eBPF
  • Avi Deitcher on LinuxKit
  • Kenton Varda on Cloudflare Workers
  • others TBC

I am planning or hoping to attend in 2019 at least the events below, but also no doubt several other ones.

27 Dec 2018, 19:00

Confused Deputies Strike Back

A few weeks back Kubernetes had its first really severe security issue, CVE-2018-1002105. For some background on this, and how it was discovered, I recommend Darren Shepherd’s blog post; he discovered it via some side effects, and initially it did not appear to be a security issue, just an error handling issue. Of course we know well that many error handling issues can be escalated, but why was this one so bad?

To summarize the problem, there is an API server proxy component that clients can use to talk to other API endpoints. As the postmortem document says:

  • Kubernetes API server proxy components still use http/1.1 upgrade-based connection tunneling, which does not distinguish between request data sent by the apiserver while establishing the backend connection, and data sent by the requesting user

  • High and low-privilege API requests to aggregated API servers are proxied via the same component with the same high-permission transport credentials

Well, this security issue is actually well known enough to have its own name: it is the confused deputy problem, originally written about by Norm Hardy in 1988, although referring to an original example from the 1970s. The essence of the problem is that there are three parties involved: a user, a proxy or deputy type component, and an object or service that needs to be accessed, or a similar set of endpoints. The user connects to the deputy to perform an action on an object, but the deputy can be persuaded to act on an object that it has access to rather than one the end user has access to.

Imagine asking your accountant to fill in your tax return. Your accountant has access to your tax return, but also to those of other customers. If the accountant is buggy or can be confused, she could fill in one of these tax returns instead of yours. The general problem is that in order to run a tax return filing service, you need the ability to fill in lots of different people’s tax returns. You become a very privileged node, a superuser of tax returns. The tax office has to respect your authority to fill in lots of tax returns, and read them, so the accountant’s credentials must be very privileged. We see similar designs in all sorts of places, like suid applications in Unix that can do operations on behalf of any user, must be very highly trusted, and are often the source of security bugs.

What is the solution? Well, we could just not have these deputies. Fill in your own tax return! But in effect this says do not use microservices. If every endpoint needs to have the code for filling in tax returns, we lose the benefit of microservices: we have to update lots of endpoints together, we cannot have a team building better accountant services, and so on. What we really want is for the accountant not to be a superuser: she has no permissions of her own, but with our request we can pass credentials (maybe time limited) to update our tax return (though not to impersonate us generally). This access control model is called capability-based security: access is granted via unforgeable but transferable tokens that provide access to objects. You can imagine they are keys, like passing your car key to a valet service, rather than the valet service having a master key for all cars that they might need to park.
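
To make the contrast concrete, here is a toy sketch of the accountant with capabilities (all names invented; real distributed capabilities would need unforgeable, possibly time limited cryptographic tokens rather than in-process references):

```go
package main

import "fmt"

type taxReturn struct {
	owner    string
	contents string
}

// editCap is a capability: holding the value is the permission.
// There is no accountant-wide master credential to confuse.
type editCap struct{ target *taxReturn }

func (c editCap) fill(contents string) { c.target.contents = contents }

// The accountant service never sees identities or a registry of all
// returns, only the capabilities clients pass in with their requests.
func accountant(edit editCap) {
	edit.fill("completed by accountant")
}

func main() {
	mine := &taxReturn{owner: "me"}
	theirs := &taxReturn{owner: "someone else"}

	// I can only delegate authority I actually hold: a capability
	// to my own return.
	accountant(editCap{target: mine})

	fmt.Println("mine:", mine.contents)     // filled in
	fmt.Println("theirs:", theirs.contents) // untouched
}
```

The point of the design is that there is no ambient authority for a bug to misuse: the deputy can only touch what it has explicitly been handed.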

The standard access control list (ACL) models of authorization are all about making decisions based on identity, a concept that clearly must not be transferable. I never want my accountant to have to (or be able to) pretend to be me to fill in my tax return. The classic solution in this case would be for me to be able to add additional people to the ACL for my tax return; this is modeled in new ACL frameworks like NGAC from NIST (sorry, no link right now, the website is down due to the government shutdown). This does not immediately seem applicable to the Kubernetes issue though, and is much more complex than passing my API access credential to the API proxy server. At this point I highly recommend the excellent short paper ACLs don’t by Tyler Close, one of my favourite papers (I should do a Papers We Love session on it). His examples mainly come from the browser, another prevalent deputy with a lot of security issues, such as CSRF, another confused deputy attack. Capabilities are actually very simple to understand and reason about.

ACL based security is fine for many situations, in particular where there are only two parties and you just want to mediate access to a set of resources. But microservices do not appear to be in that sweet spot, as Kubernetes found out with its API proxy microservice. Bugs can be fixed, but as the retrospective points out, all changes will need to be examined for security issues. As Tyler Close says, “the correct implementation of an access policy cannot be ascertained by an examination of the ACLs configured for an application, but must also include an examination of the program’s source code. To date, this technique has been error prone.” It was not even the only bug that week that was a confused deputy issue: the Zoom critical bug was the same problem, where UDP packets could confuse the deputy service. These are critical issues happening on a regular basis, and no doubt many more lurk.

The entire reason for microservices is to have third parties to delegate services to, and we need to shift away from ACL based models to capabilities for microservices. Of course this is non trivial: distributed capabilities (as opposed to local ones) have not been used much, and we don’t have a good infrastructure for them yet. I will write more about the practicalities in a further post, but we need to start shifting security to be microservice native too, not just adopting things that worked for monoliths.
