Moonshot continues to be busy. Lately we’ve been focusing on finishing our core technical specs, better understanding how Moonshot will be deployed and working on our trust infrastructure. At the same time, we’re beginning to watch organizations evaluate whether Moonshot addresses a need they have. I’m excited by this process because I like to see technology I work on adopted and because the feedback we get is very valuable. This week though, I personally get to participate in such an exercise. Tomorrow I’ll be speaking at the Australian Research Data Storage Initiative’s workshop on Moonshot. I’ll be giving background on the project, talking about community success, and talking about how Moonshot can help Australia. I’m looking forward to that. I’m also very excited about a brainstorming exercise I’ll be participating in today. Several key participants in the RDSI project and I will get together to carefully evaluate their needs and see what it would take for a Moonshot solution. I hope Moonshot does end up being a good fit. Regardless, I enjoy this sort of problem solving session and am happy to have the opportunity to sit down with knowledgeable people and see how we can solve real problems!
I recently put together a reading list on Project Moonshot for a friend. If you have seen discussions of Moonshot but not known where to get started understanding the technology, here is a fairly good initial list. It’s long, but take a look starting at the beginning and let us know what you think. Take a look at
That briefing paper contains outdated versions of the technical
Oh, yeah, and for the totally cool stuff that is still being designed
It’s been a while since I’ve written about Moonshot. A lot has gone on; we’ve been too busy doing to be busy blogging. However, there’s something that’s happened recently that’s so cool I had to take a moment to discuss it. Padl Software, the same people (well, person) who brought us LDAP support to replace NIS and the first Active Directory clone, has now produced a GSS-EAP Security Service Provider. That’s software that implements the Moonshot protocol and plugs it into the standard Windows security infrastructure. This is neat because it allows you to use GSS-EAP with unmodified Windows applications like Internet Explorer and Outlook/Exchange. Obviously, this will be great for Moonshot. However, I think the positive effects are more far-reaching than that. Luke has demonstrated that we can evolve the Windows security infrastructure without waiting for Microsoft to lead the way. For those of us working in the enterprise security space, that’s huge. We can innovate and bring our innovation to Windows. In terms of getting acceptance in important user communities, getting funding for work, and making a practical difference, that’s a big deal.
This code is still in the early stages. Padl has not decided how the code will be made available; we don’t know yet whether it will be under an open-source license. Luke, naturally, wants to get paid for his work. However, if this code does get released under an open-source license, it will be very valuable: it will give all of us who are looking for a starting point for security innovations a way to bring those innovations to Windows. Some in the open-source community will argue that we shouldn’t work on improving Windows: if the open-source platforms have features Windows does not, then it may drive people to open-source. Especially for enterprise infrastructure, it tends not to work that way; you need broad cross-platform support to drive new technology. It does mean, though, that we can take control of the evolution of our infrastructure: even for Windows, there is no requirement that a single vendor controls what is possible.
Computationally, it’s sexy for a low-power device: a 1.2 GHz ARM with 512 MB of RAM. As a platform to enable people to create novel applications, it’s great. It has multiple USB ports, two Ethernet ports, a built-in 802.11b/g access point, Bluetooth, audio, eSATA, micro SD and full-sized SD. So, whether your application needs storage, networking, audio, or some interesting side device, you’re covered. With the optional JTAG adapter it’s even fairly friendly to developers: there are options for recovering from most failures and full console access. If you’re looking for a reference platform that’s still nominally embedded, but that allows you to play around thinking about what your application could do, this is a great option.
However, especially for Freedom Box, it’s important to remember that the reference board a developer wants is not the same as a cost-reduced product on which to actually deploy something for people to buy. The Dream Plug is wrong for every actual deployment I’ve imagined. First, it’s huge. If you happen to have free power plugs on your wall with lots of space above and below them, it might be an option. If, however, you’re like everyone I know and have power strips, constrained spacing, etc., then the industrial design will disappoint at every turn. I was hoping for something that kind of looked like an Apple AirPort Express. It’s significantly larger than that in every dimension. Also, the plug is oriented the wrong way for minimizing the fraction of the power strip it takes up. The power supply part of the device can be detached, although even that is way too huge for a power strip, and the cable between the power supply and the computer is fairly short. You can reconfigure the device to escape its plug nature by running a cord from a normal power outlet to the power supply. But then, if you have to detach the power supply for heat management reasons, you get a cord from the outlet to the power supply and another cord from the supply to the device. Add a few USB or eSATA devices and the octopus of cables begins to resemble some arcane mechanism. (So far, no elder gods have appeared though.)
The other issue is that you almost never want all the functionality. You pay for the Bluetooth, audio and eSATA in terms of cost, heat and space regardless of whether you use them. I don’t have a lot of applications that really take advantage of the full array of hardware.
The firmware update mechanism is decidedly not targeted at end-users. The version I obtained had a hacked copy of Debian lenny. No mechanism was provided for replacing the image in a safe manner that did not potentially require the optional highly-non-end-user-compatible JTAG board if something got interrupted. You could either unscrew the device and get to the micro SD card containing the image, or run software to replace the image from within Debian lenny. It’s possible to configure the boot loader to run an update image off USB or SD until that succeeds, but doing that is also a non-end-user operation.
In conclusion, the name is perfect. This is exactly what hackers need to dream about the power of small computers everywhere. However we must not forget that there’s a step required to turn dreams into reality. Just as with any fully proprietary product, Freedom Box will require cost reduction steps, semi-custom boards and actual OEMs to truly be usable. The claim in the previous sentence that the Freedom Box may have proprietary elements is disquieting to some. I think we can put together a software stack that is free with the possible exception of some device firmware. However, my suspicion is that anyone who turns that into a fully realized end-user product will add proprietary elements. I suspect some of the results of the cost reduction process such as resulting semi-custom boards will be proprietary. In many cases though, I suspect some proprietary software elements will be introduced.
Last fall, Moonshot was steaming forward. We ran into some non-technical obstacles, and progress on the implementation was disturbingly quiet from the end of October through February. That changed: the code was released February 25.
Since then, the project has picked up the momentum of last fall. There’s a new developers corner with helpful links for participating in the project, obtaining the code, and preparing for our upcoming Second Moonshot Meeting. Standards work in the ABFAB working group has been making steady progress the entire time.
The jabber chat room has been quite active. Developers have been working in three time zones, so whenever I get up there’s likely to be interesting progress awaiting me and new things to work on in the chat logs. Today was no exception: Luke moonshotted Jabber. This is exciting: it’s the first time our code has been used to authenticate a real application instead of a test service. Other discussion from the chat room not reflected in e-mail is equally exciting. Luke also has Moonshot working with OpenSSH in controlled environments; it appears to require some updates to the OpenSSH GSS-API support.
Now is a really great time to get involved in Moonshot. We hope to see you on our lists and in our chat.
With last night’s news, we need to think about eating our own dogfood: using Moonshot to authenticate to our own Jabber server and to our repository for commits. Right now, there are some security issues with the code (lack of EAP channel binding) that might make that undesirable. However, within a few weeks or months I expect we will be there!
Sometimes I begin to think that we’ve solved most of the challenges to IPv6 deployment. Then something happens.
This time it was a DAP-1522 access point. Not a NAT, not a router, just a layer 2 device. A while after deploying the device, I noticed that sometimes mail failed to work. After some debugging, the problem turned out to be that my laptop wasn’t getting an IPv6 address. The router appeared to be sending out advertisements, and other machines on the same subnet were working fine.
This laptop had associated with the new access point, whose default configuration helpfully includes IGMP snooping. IGMP is IPv4-only, so the snooping logic saw that no one had subscribed to the multicast group carrying the (IPv6) router advertisements and thus didn’t forward them to the wireless link.
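The failure mode is easy to see from how IPv6 multicast maps onto Ethernet. Router advertisements go to the all-nodes group ff02::1, which maps to an Ethernet group address no host ever joins via IGMP (the IPv6 equivalent is MLD). A quick sketch of the RFC 2464 mapping:

```python
import ipaddress

def ipv6_multicast_mac(addr: str) -> str:
    """Map an IPv6 multicast address to its Ethernet group MAC:
    33:33 followed by the low 32 bits of the address (RFC 2464)."""
    ip = ipaddress.IPv6Address(addr)
    low32 = ip.packed[-4:]
    return "33:33:" + ":".join(f"{b:02x}" for b in low32)

# Router advertisements are sent to the all-nodes group ff02::1.
print(ipv6_multicast_mac("ff02::1"))  # -> 33:33:00:00:00:01
```

An IGMP-only snooper never sees a join for any 33:33:xx:xx:xx:xx group address, so unless it floods unknown multicast it silently drops the advertisements.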
We have a long way to go if layer 2 devices sold today are incompatible with v6 in their default configurations.
I attended a workshop sponsored by the IAB, W3C, ISOC and MIT on Internet privacy. The workshop had much more of a web focus than it should have: the web is quite important and certainly deserved a majority of the time, but backend issues, network issues, and mobile applications are important too. For me this workshop was an excellent place to think about linkability and correlation of information. When people describe attacks such as using the ordered list of fonts installed in a web browser to distinguish one person from another, it’s all too easy to dismiss people who want to solve that attack as the privacy fringe. Who cares if someone knows my IP address or what fonts I use? The problem is that computers are very good at putting data together. If you log into a website once, and then later come back to that same website, it’s relatively easy to fingerprint your browser and determine that it is the same computer. There’s enough information that even if you use private browsing mode, clear your cookies and move IP addresses, it’s relatively easy to perform this sort of linking.
It’s important to realize that partially fixing this sort of issue will make it take longer to link two things with certainty, but tends not to actually help in the long-run. Consider the font issue. If your browser returns the set of fonts it has in the order they are installed, then that provides a lot of information. Your fingerprint will look the same as people who took the same OS updates, browser updates and installed the same additional fonts in exactly the same order as you. Let’s say that the probability that someone has the same font fingerprint as you is one in a million. For a lot of websites that’s enough that you could very quickly be linked. Sorting the list of fonts reduces the information; in that case, let’s say your probability of having the same font set as someone else is one in a hundred. The website gets much less information from the fonts. However it can combine that information with timing information etc. It can immediately rule out all the people who have a different font profile. However as all the other people who have the same font fingerprint access the website over time, differences between them and you will continue to rule them out until eventually you are left. Obviously this is at a high level. One important high-level note is that you can’t fix these sorts of fingerprinting issues on your own; trying makes things far worse. If you’re the only one whose browser doesn’t give out a font list at all, then it’s really easy to identify you.
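One way to make the argument above concrete is to measure each attribute in bits of identifying information: an attribute shared by a fraction p of users contributes -log2(p) bits, and (assuming the attributes are independent) the bits add. The numbers below are purely illustrative, not measurements:

```python
import math

def bits(probability: float) -> float:
    """Surprisal, in bits of identifying information, of an attribute
    value shared by the given fraction of users."""
    return -math.log2(probability)

# Illustrative figures matching the example in the text: an ordered
# font list matching one user in a million, a sorted one matching
# one in a hundred, plus two other hypothetical signals.
unsorted_fonts = bits(1e-6)   # ~19.9 bits on its own
sorted_fonts = bits(1e-2)     # ~6.6 bits after sorting
timezone = bits(1 / 24)       # ~4.6 bits
screen_size = bits(1 / 50)    # ~5.6 bits

# Assuming independence, the bits simply add: sorting the font list
# shrinks its contribution, but combined with other attributes the
# tracker still recovers most of its discriminating power over time.
combined = sorted_fonts + timezone + screen_size
print(round(combined, 1))  # -> 16.9
```

This is why partial fixes only slow linking down: each additional observed attribute keeps narrowing the candidate set until one person is left.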
The big question in my mind now is how much we care about this linking. Governments have the technology to do a lot with linking. We don’t have anything technical we can do to stop them, so we’ll need to handle that with laws. Large companies like Google, Facebook and our ISPs are also in a good position to take significant advantage of linking. Again, though, these companies can be regulated; technology will play a part, especially in telling them what we’re comfortable with and what we’re not, but most users will not need to physically prevent Google and Facebook from linking their data. However, smaller websites are under a lot less supervision than the large companies. Unless you take significant steps, such a website can link all your activities on that website. Also, if any group of websites in that space wants to share information, they can link across the websites.
I’d like to run thought experiments to understand how bad this is. I’d like to come up with examples of things that people share with small websites but don’t want linked together, or alternatively don’t want linked back to their identity, and then look at how this information could be linked. However, I’m having trouble with these thought experiments because I’m just not very privacy-minded. I can’t think of something that I share on the web that I wouldn’t link directly to my primary identity. I certainly can’t find anything concrete enough to be able to evaluate how much I care to protect it. Help would be appreciated here, if you can think of fairly specific examples. There’s lots of information I prefer to keep private, like credit card numbers, but there it’s not about linking at all: I can reasonably assume that the person I’m giving my credit card number to has a desire to respect my privacy.
Tuesday, MIT Kerberos had a bad hair day—one of those days where you’re looking through your hair and realize that it’s turned to Medusa’s snakes while you weren’t looking. Apparently, since the introduction of RC4, MIT Kerberos has had significant problems handling checksums. Recall that when Kerberos talks about checksums it’s conflating two things: unkeyed checksums like SHA-1 and message authentication codes like HMAC-SHA1 used with an AES key derivation. The protocol doesn’t have a well defined concept of an unkeyed checksum, although it does have the concept of checksums like CRC32 that ignore their keys and can be modified by an attacker. One way of looking at it is that checksums were over-abstracted and generalized. Around the time that 3DES was introduced, there was a belief that we’d have a generalized mechanism for introducing new crypto systems. By the time RFC 3961 actually got written, we’d realized that we could not abstract things quite as far as we’d done for 3DES. The code however was written as part of adding 3DES support.
There are two major classes of problem. The first is that the 3DES (and I believe AES) checksums don’t actually depend on the crypto system: they’re just HMACs. They do end up needing to perform encryption operations as part of key derivation. However, the code permitted these checksums to be used with any key, not just the kind of key that was intended. In a nice abstract way, the operations of the crypto system associated with the key were used rather than those of the crypto system loosely associated with the checksum. I guess that’s good: feeding a 128-bit key into 3DES might kind of confuse 3DES, which expects a 168-bit key. On the other hand, RC4 has a block size of 1 because it is a stream cipher. For various reasons, that means that regardless of what RC4 key you start with, if you use the 3DES checksum with that key, there are only 256 possible outputs for the HMAC. Sadly, that’s not a lot of work for the attacker. To make matters worse, one of the common interfaces for choosing the right checksum to use was to enumerate through the set of available checksums and pick the first one that would accept the kind of key in question. Unfortunately, 3DES came before RC4, and there are some cases where the wrong checksum would be used.
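To see why 256 possible outputs is fatal, consider a toy MAC whose effective key space has collapsed to a single byte. This is an illustration of the scale of the problem, not the actual Kerberos key derivation: forging then takes at most 256 guesses.

```python
import hashlib
import hmac
import secrets

def toy_mac(key_byte: int, message: bytes) -> bytes:
    """Toy MAC whose key space has collapsed to one byte, mimicking
    the scale of the RC4-key/3DES-checksum interaction (illustrative
    only; not the real derivation)."""
    return hmac.new(bytes([key_byte]), message, hashlib.sha1).digest()

# The victim MACs a message under an unknown (collapsed) key.
secret = secrets.randbelow(256)
message = b"forged request"
target = toy_mac(secret, message)

# The attacker recovers a working key in at most 256 guesses.
forged_key = next(k for k in range(256)
                  if hmac.compare_digest(toy_mac(k, message), target))
print(toy_mac(forged_key, message) == target)  # -> True
```

With a proper 168-bit key space the same search would take about 2^167 guesses on average; 2^8 is effectively free.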
Another serious set of problems stems from the handling of unkeyed checksums. It’s important to check that a received checksum is keyed if you are in a context where an attacker could have modified it. Using MD5 outside of encrypted text to integrity-protect a message doesn’t make sense. Some of the code was not good about checking this.
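A hypothetical sketch of the kind of check that was missing: before trusting a checksum on a message an attacker could have constructed, verify that the checksum type is actually keyed. All names here are illustrative, not the real MIT Kerberos tables or APIs:

```python
# Hypothetical checksum registry; the types and flags are
# illustrative, not the actual MIT Kerberos checksum tables.
CHECKSUM_TYPES = {
    "crc32":               {"keyed": False},
    "md5":                 {"keyed": False},
    "hmac-sha1-des3":      {"keyed": True},
    "hmac-sha1-96-aes128": {"keyed": True},
}

def accept_checksum(cksum_type: str, attacker_controlled: bool) -> bool:
    """Reject unkeyed checksums wherever an attacker could have
    recomputed them, e.g. on data outside encrypted text."""
    info = CHECKSUM_TYPES[cksum_type]
    if attacker_controlled and not info["keyed"]:
        # An attacker can compute MD5 or CRC32 over a forged message
        # themselves, so a matching unkeyed checksum proves nothing.
        return False
    return True  # proceed to the actual cryptographic verification

print(accept_checksum("md5", attacker_controlled=True))             # -> False
print(accept_checksum("hmac-sha1-des3", attacker_controlled=True))  # -> True
```

The 1.7-era GSS-API attack described below is exactly what happens when this gate is absent: the attacker picks an unkeyed checksum type, so "verification" succeeds without any key at all.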
What worries me most about this set of issues is how many new vulnerabilities were introduced recently. The set of things you can do with 1.6 based on these errors was significant, but not nearly as impressive as with 1.7. A whole new set of attacks was added for the 1.8 release. In my mind, the most serious attack was added for the 1.7 release. A remote attacker can send an integrity-protected GSS-API token using an unkeyed checksum. Since there’s no key, the attacker doesn’t need to worry about not knowing it. The checksum verifies, and the code is happy to go forward.
I think we need to take a close look at how we got here and what went wrong. The fact that multiple successive releases made the problem worse makes it clear that we produced a set of APIs where doing the wrong thing is easier than doing the right thing. It seems like there is something important to fix here in our existing APIs and documentation. It might be possible to add tests, or things to look for when adding new crypto systems. However, I also think there is an important lesson to take away at the design level. Right now I don’t know what the answers are, but I encourage the community to think closely about this issue.
I’m speaking about MIT Kerberos because I’m familiar with the details there. However it’s my understanding that the entire Kerberos community has been thinking about checksums lately, and MIT is not the only implementation with improvements to make here.
At the end of September, things were quite exciting as we had our first project meeting. At that meeting those in the room saw a demonstration of the Moonshot GSS EAP mechanism and we discussed a number of open issues and began to plan for our test infrastructure. We’ve made significant progress on the specification front and on explaining Moonshot to important communities since then. However there has been little public progress on the implementation front.
Unfortunately, getting the necessary legal clearance and agreements to release code often takes longer than anyone would like; that is what is happening here. We’re all eagerly awaiting final approval from the lawyers and JANET(UK) management. However, things have been moving behind the scenes. Throughout much of October, Luke Howard and Linus Nordberg were working on their respective parts of the code.
I’ve also been working on putting together the test and build infrastructure. As we discussed at the meeting, we’re going to use Debian and Ubuntu as the basis for our testing. For example, we hope to release virtual machine images for these platforms for the major Moonshot components. Thus the primary build environment for our testing and virtualization will be Debian. I’ve been putting together that here. Right now, that branch will pull together packages of the SAML infrastructure that we need. I’ve also been looking into virtualized test frameworks and believe I’ve found one that meets our needs. I’ve also put together some primitive build infrastructure that is independent of packaging, available here. I’ve set up a buildbot that builds both environments. So, as the code becomes available, we’ll be in a good position to start making it available.
The ABFAB working group, which will be standardizing technologies that Moonshot depends on, had its first meeting at IETF 79 in Beijing, China. The meeting was quite productive. Because the meeting was the first of the working group, there were some introductory presentations. A group of authors are putting together a proposed architecture document; we presented the current state of our work. However things have evolved significantly since the working group meeting and I think it will make more sense to wait a couple of weeks to discuss the architecture document.
Most of the time was spent on two presentations. The first was the status of the GSS mechanism: we discussed issues that were discovered while implementing the EAP GSS-API mechanism. Discussion in the room tended to support the proposals made in the slides; a few issues will need to come to the list. The most interesting discussion was of SAML AAA integration.
Minutes are available.