Debian: Init Interfaces–Our Users *and* Free Software

February 8th, 2014 by hartmans

Recently, I’ve been watching the Debian Schadenfreude related to init systems. For those who do not know, Debian is trying to decide whether Systemd or Upstart will be used to start software on Debian. There are two other options, although I think Systemd and Upstart are the main contenders. Currently the Technical Committee is considering the issue, although there’s some very real chance that the entire project will get dragged through this swamp with the general resolution process. This is one of those discussions that proves that Debian truly is a community: if we didn’t have something really strong holding us together we’d never put up with something this painful. Between the accusations that our users now know the persecution European pagans faced at the hands of Christians as the Systemd priests drive all before them, a heated discussion of how the Debian voting system interacts with this issue, two failed calls for votes, a discussion of conflicts of interests, and a last-minute discussion of whether the matter had been appropriately brought before the Technical committee (and if so, under what authority), there’s certainly schadenfreude to go around if you’re into that sort of thing.

However, through all this, the Technical Committee has been doing great work understanding the issues; it has been a pleasure to watch their hard work from the sidelines. I’d like to focus on one key question they’ve found: how tightly can other software depend on the init system? Each init system offers some nice features. Upstart has an event triggering model. Systemd manages login sessions and, at least in my opinion, has a really nice service file format. Can I take advantage of these in my software? If I do, then users of my software might need to use a particular init system. Ian Jackson argues that we should give our users the choice of what init system to run. He reminds us that Debian is a community that supports diversity and diverse goals. We support multiple desktop systems, web browsers, etc. Wherever we can, we support people working on their goals and give developers the opportunity to make it work. When developers succeed, we give our users the opportunity to choose from among the options that have been developed.

I think of Ian’s argument as an appeal to part of our Social Contract. Clause 4 of the social contract begins:

Our priorities are our users and free software.
We will be guided by the needs of our users and the free software community. We will place their interests first in our priorities. We will support the needs of our users for operation in many different kinds of computing environments.

Ian is right that our users will be served by choice in init systems. However, this trade-off needs to be balanced against the needs of the free software community. Diversity is an important goal, but it should not come at the price of stagnation and lost innovation. If I want to avoid using init scripts because they don’t provide restart on failure, because they are hard to write correctly, and because they don’t provide access to modern security isolation techniques, I should be able to do that. If Systemd service files provide a superior solution, I should be able to work toward using them. If the desire for init system diversity shuts down my ability to find like-minded people who want to take advantage of improvements in init systems and work towards that goal, then we’ve significantly damaged Debian as a forum for technical cooperation.
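To make the features at stake concrete, here is a minimal sketch of a systemd service file. The service name and binary path are hypothetical, but the directives shown are the kind of thing at issue: declarative restart-on-failure and modern isolation, neither of which a traditional init script gives you for free.

```ini
[Unit]
Description=Example daemon (hypothetical service, for illustration only)

[Service]
ExecStart=/usr/sbin/example-daemon
# Restart on failure: declared in one line rather than hand-coded.
Restart=on-failure
# Modern security isolation techniques, also declarative.
PrivateTmp=yes
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target
```

An init script reproducing this behavior would need its own supervision loop and namespace setup, which is precisely why depending on the service file format is attractive.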

Early in my work as a Debian Developer, Anthony Towns taught me an interesting principle. Those who do the work get significant influence over how and what gets done. Ian’s right that we need to enable people to work towards choice in init systems. Those like-minded people need to be given a chance to find each other and pursue their goal. However the cost of success should be on the shoulders of those who value choice in init systems. It should not come at the cost of preventing people from depending on improvements in init systems.

The best proposal I’ve seen for balancing this is to enumerate stable interfaces that people want to use: things like the logind interface or, my favorite, the service file interface. It needs to be possible to make another implementation of the interface. The interface needs to be stable enough that a dedicated team could have a chance of catching up with an implementation. At some point during the release cycle such interfaces would need to be frozen. However, I don’t think it is reasonable to mandate that there are multiple implementations of the interface, only that there could be. The point is to give people a chance to work towards diversity in init systems, not to force it down people’s throats kicking and screaming until they leave the project or ignore our processes. Steve Langasek and Colin Watson seem to be working towards this.

It’s possible there’s another approach besides interfaces. My main concern is the same as Ian’s: maintain Debian as a forum for people to pursue their goals and work together. I suspect we see the conflict in these goals differently. I hope that we as a project can explore this conflict and figure out where we have common ground. I commit to exploring how we can work towards init choice in my frameworks; I ask those who prioritize init choice to commit to explore how we can take advantage of new features in their framework.

IETF Takes on Challenge of Internet Monitoring

November 6th, 2013 by hartmans

At IETF 88, we held a plenary discussion of how we could harden the Internet against ongoing monitoring and surveillance. There were no significant surprises in what people said about monitoring. So, we had an opportunity to focus on what the IETF as a standards organization responsible for technical standards that drive the Internet can do about this problem. I think we made amazing progress.

The IETF works by consensus. We discuss issues, and see if there’s some position that can gain a rough consensus of the participants in the discussion. After a couple of hours of discussion, Russ Housley asked several consensus questions. The sense of the room is that we should treat these incidents of monitoring as an attack and include them in the threats we try to counter with security features in our protocols. The room believes that encryption, even without authentication, has value in fighting these attacks. There was support for the idea that end-to-end encryption is valuable even when there are middle boxes. IETF decisions made in meetings are confirmed on public mailing lists, so the sense of the room is not final. Also, note that I did not capture the exact wording of the questions that were posed.

This is huge. There is very strong organizational agreement that we’re going to take work in this space seriously. Now that we’ve decided pervasive monitoring is an attack, anyone can ask how a proposed protocol (or change to a protocol) counters that attack. If it doesn’t handle the attack and there is a way to address the attack, then we will be in a stronger position to argue that the threat should be addressed. In addition, the commitment to encryption providing value without authentication will be useful in providing privacy and minimizing fingerprinting by passive attackers.

The IETF is only one part of the solution. We can work on the standards that describe how systems interact. However, implementations, policy makers, operators and users will also play a role in securing the Internet against pervasive attacks.

Moonshot and RDSI

April 15th, 2012 by hartmans

Moonshot continues to be busy. Lately we’ve been focusing on finishing our core technical specs, better understanding how Moonshot will be deployed and working on our trust infrastructure. At the same time, we’re beginning to watch organizations evaluate whether Moonshot addresses a need they have. I’m excited by this process because I like to see technology I work on adopted and because the feedback we get is very valuable. This week though, I personally get to participate in such an exercise. Tomorrow I’ll be speaking at the Australian Research Data Storage Initiative’s workshop on Moonshot. I’ll be giving background on the project, talking about community success, and talking about how Moonshot can help Australia. I’m looking forward to that. I’m also very excited about a brainstorming exercise I’ll be participating in today. Several key participants in the RDSI project and I will get together to carefully evaluate their needs and see what it would take for a Moonshot solution. I hope Moonshot does end up being a good fit. Regardless, I enjoy this sort of problem solving session and am happy to have the opportunity to sit down with knowledgeable people and see how we can solve real problems!

Moonshot Introduction

December 6th, 2011 by hartmans

I recently put together a reading list on Project Moonshot for a friend. If you have seen discussions of Moonshot but not known where to get started understanding the technology, here is a fairly good initial list. It’s long, but take a look starting at the beginning and let us know what you think.

Start with the project site:

http://www.project-moonshot.org/

Specifically, the feasibility analysis and the IETF 78 briefing paper:

http://www.project-moonshot.org/sites/default/files/moonshot-feasibility-analysis.pdf

http://www.project-moonshot.org/sites/default/files/moonshot-briefing-ietf-78.pdf

That briefing paper contains outdated versions of the technical specifications. For the current versions, please see:

http://tools.ietf.org/html/draft-ietf-abfab-arch-00

http://tools.ietf.org/html/draft-ietf-abfab-gss-eap

http://tools.ietf.org/html/draft-ietf-abfab-gss-eap-naming

http://tools.ietf.org/html/draft-ietf-abfab-aaa-saml

Oh, yeah, and for the totally cool stuff that is still being designed, please see:

http://tools.ietf.org/html/draft-mrw-abfab-multihop-fed

http://tools.ietf.org/html/draft-mrw-abfab-trust-router

Moonshot SSP

October 12th, 2011 by hartmans

It’s been a while since I’ve written about Moonshot. A lot has gone on; we’ve been too busy doing to be busy blogging. However, there’s something that’s happened recently that’s so cool I had to take a moment to discuss it. Padl Software, the same people (well, person) who brought us LDAP support to replace NIS and the first Active Directory clone, has now produced a GSS-EAP Security Service Provider. That’s software that implements the Moonshot protocol and plugs it into the standard Windows security infrastructure. This is neat because it allows you to use GSS-EAP with unmodified Windows applications like Internet Explorer and Outlook/Exchange. Obviously, this will be great for Moonshot. However, I think the positive effects are more far-reaching than that. Luke has demonstrated that we can evolve the Windows security infrastructure without waiting for Microsoft to lead the way. For those of us working in the enterprise security space, that’s huge. We can innovate and bring our innovation to Windows. In terms of getting acceptance in important user communities, getting funding for work, and making a practical difference, that’s a big deal.

This code is still in the early stages. Padl has not decided how the code will be made available. We don’t know if it will be under an open-source license yet. Luke, naturally, wants to get paid for his work. However, if this code does get released under an open-source license, it will be very valuable: it will give all of us who are working on security innovations a starting point for bringing those innovations to Windows. Some in the open-source community will argue that we shouldn’t work on improving Windows: if the open-source platforms have features Windows does not, then it may drive people to open-source. Especially for enterprise infrastructure, it tends not to work that way. You need broad cross-platform support to drive new technology. However, it does mean that we can take control of the evolution of our infrastructure; even for Windows there is no requirement that a single vendor controls what is possible.

Dream Plug

June 27th, 2011 by hartmans

As part of trying to help with Freedom Box, I gained access to a Dream Plug. The Dream Plug is one of the leading possible platforms for Freedom Box.

Computationally it’s sexy for a low-power device. There’s a 1.2 GHz ARM with 512 MB of RAM. As a platform to enable people to create novel applications, it’s great. It has multiple USB ports, two Ethernet ports, a built-in 802.11b/g access point, Bluetooth, audio, eSATA, micro SD and full-sized SD. So, whether your application needs storage, networking, audio, or some interesting side device, you’re covered. With the optional JTAG adapter it’s even fairly friendly to developers: there are options for recovering from most failures and full console access. If you’re looking at a reference platform that’s still nominally embedded, but that allows you to play around thinking about what your application could do, this is a great option.

However, especially for Freedom Box, it’s important to remember that the reference board a developer wants is not the same as a cost-reduced product on which to actually deploy something for people to buy. The Dream Plug is wrong for every actual deployment I’ve imagined. First, it’s huge. If you happen to have free power plugs on your wall with lots of space above and below them, it might be an option. If, however, you’re like everyone I know and have power strips, constrained spacing, etc., then the industrial engineering will disappoint at every turn. I was hoping for something that kind of looked like an Apple Airport Express. It’s significantly larger than that in every dimension. Also, the plug is oriented the wrong way for minimizing the fraction of the power strip it takes up. The power supply part of the device can be detached, although even that is way too huge for a power strip, and the cable between the power supply and the computer is fairly short. You can reconfigure the device to abandon its plug nature, running a cord from a normal power outlet to the power supply. But then if you have to detach the power supply for heat management reasons, you get a cord from the outlet to the power supply and another cord from the supply to the device. Add a few USB or eSATA devices and the octopus of cables begins to resemble some arcane mechanism. (So far, no elder gods have appeared though.)

The other issue is that you almost never want all the functionality. You pay for the Bluetooth, audio and eSATA in terms of cost, heat and space regardless of whether you use them. I don’t have a lot of applications that really take advantage of the full array of hardware.

The firmware update mechanism is decidedly not targeted at end-users. The version I obtained had a hacked copy of Debian lenny. No mechanism was provided for replacing the image in a safe manner that did not potentially require the optional highly-non-end-user-compatible JTAG board if something got interrupted. You could either unscrew the device and get to the micro SD card containing the image, or run software to replace the image from within Debian lenny. It’s possible to configure the boot loader to run an update image off USB or SD until that succeeds, but doing that is also a non-end-user operation.

In conclusion, the name is perfect. This is exactly what hackers need to dream about the power of small computers everywhere. However we must not forget that there’s a step required to turn dreams into reality. Just as with any fully proprietary product, Freedom Box will require cost reduction steps, semi-custom boards and actual OEMs to truly be usable. The claim in the previous sentence that the Freedom Box may have proprietary elements is disquieting to some. I think we can put together a software stack that is free with the possible exception of some device firmware. However, my suspicion is that anyone who turns that into a fully realized end-user product will add proprietary elements. I suspect some of the results of the cost reduction process such as resulting semi-custom boards will be proprietary. In many cases though, I suspect some proprietary software elements will be introduced.

Moonshooting Jabber

March 15th, 2011 by hartmans

Last fall, Moonshot was steaming forward. We ran into some non-technical obstacles and progress on the implementation was disturbingly quiet from the end of October through February. That changed: the code was released February 25.

Since then, the project has picked up the momentum of last fall. There’s a new developers corner with helpful links for participating in the project, obtaining the code, and preparing for our upcoming Second Moonshot Meeting. Standards work in the ABFAB working group has been making steady progress the entire time.

The jabber chat room has been quite active. Developers have been working in three time zones. Whenever I get up there’s likely to be interesting progress awaiting me and new things to work on in the chat logs. Today was no exception. Luke moonshooted jabber. This is exciting: it’s the first time our code has been used to authenticate some real application instead of a test service. Other discussion from the chat room not reflected in e-mail is equally exciting. Luke also has Moonshot working with OpenSSH in controlled environments. It appears to require some updates to the OpenSSH GSS-API support.

Now is a really great time to get involved in Moonshot. We hope to see you on our lists and in our chat.

With last night’s news, we need to think towards eating our own dogfood and using Moonshot to authenticate to our own Jabber server and to authenticate to our repository for commits. Right now, there are some security issues with the code (lack of EAP channel binding) that might make that undesirable. However in a very small number of weeks or months I expect we will be there!

V6 Really is that Hard

March 8th, 2011 by hartmans

Sometimes I begin to think that we’ve solved most of the challenges to IPv6 deployment. Then something happens.

This time it was a DAP-1522 access point. Not a NAT, not a router, just a layer 2 device. A while after deploying the device, I noticed that sometimes mail failed to work. After some debugging, I found that the problem was that my laptop wasn’t getting an IPv6 address. The router appeared to be sending out advertisements. Other machines on the same subnet were working fine.

The laptop had associated with the new access point, whose default configuration helpfully includes IGMP snooping. The IGMP snooping detected that no one had subscribed to any IPv4 multicast group corresponding to the router advertisements and thus didn’t forward them to the wireless link.

We have a long way to go if layer 2 devices sold today are incompatible with v6 in their default configurations.

Privacy

December 10th, 2010 by hartmans

I attended a workshop sponsored by the IAB, W3C, ISOC and MIT on Internet Privacy. The workshop had much more of a web focus than it should have: the web is quite important and should certainly cover a majority of the time, but backend issues, network issues, and mobile applications are certainly important too. For me this workshop was an excellent place to think about linkability and correlation of information. When people describe attacks such as using the ordered list of fonts installed in a web browser to distinguish one person from another, it’s all too easy to dismiss people who want to solve that attack as the privacy fringe. Who cares if someone knows my IP address or what fonts I use? The problem is that computers are very good at putting data together. If you log into a web site once, and then later come back to that same website, it’s relatively easy to fingerprint your browser and determine that it is the same computer. There’s enough information that even if you use private browsing mode, clear your cookies and move IP addresses, it’s relatively easy to perform this sort of linking.

It’s important to realize that partially fixing this sort of issue will make it take longer to link two things with certainty, but tends not to actually help in the long-run. Consider the font issue. If your browser returns the set of fonts it has in the order they are installed, then that provides a lot of information. Your fingerprint will look the same as people who took the same OS updates, browser updates and installed the same additional fonts in exactly the same order as you. Let’s say that the probability that someone has the same font fingerprint as you is one in a million. For a lot of websites that’s enough that you could very quickly be linked. Sorting the list of fonts reduces the information; in that case, let’s say your probability of having the same font set as someone else is one in a hundred. The website gets much less information from the fonts. However it can combine that information with timing information etc. It can immediately rule out all the people who have a different font profile. However as all the other people who have the same font fingerprint access the website over time, differences between them and you will continue to rule them out until eventually you are left. Obviously this is at a high level. One important high-level note is that you can’t fix these sorts of fingerprinting issues on your own; trying makes things far worse. If you’re the only one whose browser doesn’t give out a font list at all, then it’s really easy to identify you.
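The arithmetic behind "partial fixes only delay linking" is worth spelling out. Here is a small sketch; the population size and per-signal probabilities are made-up illustrative numbers, not measurements, but they show how combining even weak, roughly independent signals rapidly shrinks the set of people you could plausibly be.

```python
# Sketch: how weak fingerprinting signals combine. All numbers are
# hypothetical; the point is the multiplication, not the specific values.
population = 10_000_000  # candidate users of a website

# Probability that a random other user shares each attribute with you.
signals = {
    "sorted font list": 1 / 100,  # sorting helped: 1-in-100, not 1-in-a-million
    "user agent":       1 / 50,
    "screen size":      1 / 20,
    "timezone":         1 / 10,
}

remaining = float(population)
for name, p in signals.items():
    remaining *= p  # assume rough independence between signals
    print(f"after matching {name}: ~{remaining:,.0f} candidates")
```

Starting from ten million users, four modest signals leave only a handful of candidates, and each additional observation (timing, return visits) keeps ruling people out, which is exactly the "takes longer but doesn't help in the long run" effect described above.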

The big question in my mind now is how much do we care about this linking. Governments have the technology to do a lot with linking. We don’t have anything technical we can do to stop them, so we’ll need to handle that with laws. Large companies like Google, Facebook and our ISPs are also in a good position to take significant advantage of linking. Again, though, these companies can be regulated; technology will play a part, especially in telling them what we’re comfortable with and what we’re not, but most users will not need to physically prevent Google and Facebook from linking their data. However smaller websites are under a lot less supervision than the large companies. Unless you take significant steps, such a website can link all your activities on that website. Also, if any group of websites in that space want to share information, they can link across the websites.

I’d like to run thought experiments to understand how bad this is. I’d like to come up with examples of things that people share with small websites but don’t want linked together or alternatively don’t want linked back to their identity. Then look at how this information could be linked. However, I’m having trouble with these thought experiments because I’m just not very privacy minded. I can’t think of something that I share on the web that I wouldn’t link directly to my primary identity. I certainly can’t find anything concrete enough to be able to evaluate how much I care to protect it. I’d appreciate help here: if you can think of fairly specific examples, please share them. There’s lots of information I prefer to keep private, like credit card numbers, but there, it’s not about linking at all. I can reasonably assume that the person I’m giving my credit card number to has a desire to respect my privacy.

Bad Hair Day for Kerberos

December 3rd, 2010 by hartmans

Tuesday, MIT Kerberos had a bad hair day—one of those days where you’re looking through your hair and realize that it’s turned to Medusa’s snakes while you weren’t looking. Apparently, since the introduction of RC4, MIT Kerberos has had significant problems handling checksums. Recall that when Kerberos talks about checksums it’s conflating two things: unkeyed checksums like SHA-1 and message authentication codes like HMAC-SHA1 used with an AES key derivation. The protocol doesn’t have a well defined concept of an unkeyed checksum, although it does have the concept of checksums like CRC32 that ignore their keys and can be modified by an attacker. One way of looking at it is that checksums were over-abstracted and generalized. Around the time that 3DES was introduced, there was a belief that we’d have a generalized mechanism for introducing new crypto systems. By the time RFC 3961 actually got written, we’d realized that we could not abstract things quite as far as we’d done for 3DES. The code however was written as part of adding 3DES support.

There are two major classes of problem. The first is that the 3DES (and I believe AES) checksums don’t actually depend on the crypto system: they’re just HMACs. They do end up needing to perform encryption operations as part of key derivation. However, the code permitted these checksums to be used with any key, not just the kind of key that was intended. In a nice abstract way, the operations of the crypto system associated with the key were used rather than those of the crypto system loosely associated with the checksum. I guess that’s good: feeding a 128-bit key into 3DES might kind of confuse 3DES, which expects a 168-bit key. On the other hand, RC4 has a block size of 1 because it is a stream cipher. For various reasons, that means that regardless of what RC4 key you start with, if you use the 3DES checksum with that key, there are only 256 possible outputs for the HMAC. Sadly, that’s not a lot of work for the attacker. To make matters worse, one of the common interfaces for choosing the right checksum to use was to enumerate through the set of available checksums and pick the first one that would accept the kind of key in question. Unfortunately, 3DES came before RC4 and there are some cases where the wrong checksum would be used.
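To see why 256 possible outputs is "not a lot of work," consider this sketch. It does not reproduce the actual Kerberos key-derivation path; it simply models a checksum whose effective key space has collapsed to a single unknown byte, and shows the attacker's brute force. The message and key byte are made up.

```python
import hashlib
import hmac

# Hypothetical scenario: the MAC over this message can take only 256
# values regardless of the real session key, modeled as an HMAC keyed
# by one unknown byte.
message = b"KRB-SAFE payload (illustrative)"

secret_byte = 173  # the victim's effective key, unknown to the attacker
target_mac = hmac.new(bytes([secret_byte]), message, hashlib.sha1).digest()

# The attacker just tries all 256 candidates.
recovered = None
for candidate in range(256):
    if hmac.new(bytes([candidate]), message, hashlib.sha1).digest() == target_mac:
        recovered = candidate
        break
```

At most 256 HMAC computations, i.e. microseconds of work, recover the effective key and let the attacker forge checksums at will.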

Another serious set of problems stems from the handling of unkeyed checksums. It’s important to check and make sure that a received checksum is keyed if you are in a context where an attacker could have modified it. Using an md5 outside of encrypted text to integrity protect a message doesn’t make sense. Some of the code was not good about checking this.
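The unkeyed-checksum problem is even simpler to illustrate. In the sketch below (message contents and function names are invented for illustration), a receiver that accepts an unkeyed checksum for integrity protection cannot distinguish the genuine token from one an active attacker rebuilt after modifying the message.

```python
import hashlib

# A "token" protected only by an unkeyed checksum.
original = b"transfer 10 dollars to alice"
token = (original, hashlib.md5(original).digest())

def verify(msg, checksum):
    # A receiver that accepts unkeyed checksums: no key involved.
    return hashlib.md5(msg).digest() == checksum

# An attacker who can modify traffic simply recomputes the checksum
# over the tampered message; no key means nothing to know.
tampered = b"transfer 9999 dollars to mallory"
forged_token = (tampered, hashlib.md5(tampered).digest())
```

Both tokens verify, which is why a checksum received outside encrypted text must be keyed before it counts as integrity protection.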

What worries me most about this set of issues is how many new vulnerabilities were introduced recently. The set of things you can do with 1.6 based on these errors was significant, but not nearly as impressive as 1.7. A whole new set of attacks was added for the 1.8 release. In my mind, the most serious attack was added for the 1.7 release. A remote attacker can send an integrity-protected GSS-API token using an unkeyed checksum. Since there’s no key, the attacker doesn’t need to worry about not knowing it. However, the checksum verifies, and the code is happy to go forward.

I think we need to take a close look at how we got here and what went wrong. The fact that multiple future releases made the problem worse makes it clear that we produced a set of APIs where doing the wrong thing is easier than doing the right thing. It seems like there is something important to fix here about our existing APIs and documentation. It might be possible to add tests or things to look for when adding new crypto systems. However, I also think there is an important lesson to take away at a design level. Right now I don’t know what the answers are, but I encourage the community to think closely about this issue.

I’m speaking about MIT Kerberos because I’m familiar with the details there. However it’s my understanding that the entire Kerberos community has been thinking about checksums lately, and MIT is not the only implementation with improvements to make here.