Archive for October, 2007

Back to My Mac: Peer to Peer Kerberos

Wednesday, October 31st, 2007 by hartmans

There’s a new feature in Mac OS X 10.5 called Back to My Mac. It allows you to connect from one Mac to another for screen sharing, shared folders, or other features. The authentication behind this new feature is peer-to-peer Kerberos. Each Mac runs a local KDC. Each user on the Mac has a Kerberos principal. The Mac generates a realm name starting with LKDC: that contains the hash of a public key created for the machine. A KDC location plugin allows the Mac to find out how to contact the appropriate KDC for one of these peer-to-peer realms. Then, normal Kerberos authentication can take place. The MIT Kerberos team worked with Apple to design this feature. It presented several UI and security challenges and was an interesting partnership.
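As a concrete illustration of the realm-naming idea, here is a minimal sketch of deriving a realm name from a hash of a machine's public key. The exact encoding Apple uses is an assumption here; the function name and format are hypothetical:

```python
import hashlib

def lkdc_realm(public_key_der: bytes) -> str:
    """Hypothetical sketch: derive an LKDC-style realm name from a
    machine's public key by hashing it, as described above."""
    digest = hashlib.sha1(public_key_der).hexdigest().upper()
    return "LKDC:SHA1." + digest

# Two machines with different keys end up with different realm names,
# so the realm name itself identifies the machine's key.
realm_a = lkdc_realm(b"machine A public key bytes")
realm_b = lkdc_realm(b"machine B public key bytes")
```

Because the realm name commits to the machine's key, a client that learns the realm name through a trustworthy path can verify it is talking to the right KDC.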

This mechanism effectively allows the benefits of Kerberos, such as caching of tickets, to be used by everyone, not just those in an enterprise. Like any other Kerberos authentication, the mechanism can be expanded to support other authentication schemes such as smart cards or authentication tokens in addition to passwords. It also makes things easier for programmers, because the same security mechanisms can be used both for enterprise security and for peer-to-peer security.

In terms of Kerberos deployment, this is a huge step forward. Apple joins the set of companies that are using strong security in consumer-facing products.

The most interesting thing to take away from this is that infrastructure security systems like Kerberos can be easy to use. Users don’t know that they have set up a Kerberos realm or even that they are using Kerberos. I think this will be a good response to claims that good security is too hard to use or deploy. Instead we should focus on writing the necessary user interface to make the security easy to use so that it doesn’t get in the way.

I’ve glossed over an important technical issue. It turns out that knowing which realm you need to use in order to contact a particular machine securely is hard. There are several solutions with different tradeoffs. I don’t know which one Apple ultimately ended up picking. If it is interesting to people, I can discuss the design of systems like this and the tradeoffs. Right now, though, I’m focused on the security usability implications of the high-level experience. I’d like to thank Alexandra Ellwood for research that helped form a basis for this article.

Phishing and UI: Is the Future our Hope

Wednesday, October 24th, 2007 by hartmans

I’ve been working on requirements for web authentication systems that will help us fight phishing. My current draft is here. Today, if you make the mistake of sending your password to the wrong place on the web, you have compromised your identity until you change that password everywhere. If you disclose your password to some harmless site, it may not matter. However, if you have been successfully phished, you have a real problem. Imagine if, when you accidentally tried to unlock the wrong car, a copy of your car key, license plate, and home address got mailed to the owner of the car you mistakenly tried to open. The web doesn’t work quite that way, but it does have one important property in common: very simple, easy-to-make mistakes have significant long-term consequences. My hope is that, at least in the case of authentication, we can move to a model where that’s not the case.

User interface will be critical to any such transition. During a very long transition period, both the current system and the new system will be in use at the same time. You need to be able to tell which one you are using. In some ways this will be similar to whether the lock icon is present or not. The UI challenges will be the same, although the meaning of new-mode authentication may carry less subtlety than the question of whether the lock icon implies that your web session is secure.

Ideally, UI can also help establish that you connected to the right place after the connection is made. For example, you probably recognize your account balance at your bank. If it is radically different or missing, you could choose to investigate why. If you weren’t able to find an explanation, you could become suspicious of the site. Perhaps you were directed to the wrong place and this is not really your bank.

That’s my hope. However, research on the effectiveness of UI in security is grim. (Other papers draw similar conclusions.) Users seem to ignore the lock icon almost all the time. Schemes designed to help users determine whether they are connected to the right website also fail; even when something is suspicious, users go on and disclose confidential information. The only approach that shows significant promise is having the browser present a warning page about security problems.

My proposal is a bit different. UI clues like an account balance or a list of accounts are related to the task the user is actually trying to perform, so it may be less likely that unusual events will be ignored. However, users may still explain them away as system problems or upgrades.

It’s not clear to me that there is much we can do at all if the user response to UI information is as grim as research hints at. However, I have to wonder what the role of education is in improving things. We spend significant time teaching kids to use a library and other important life skills. How much could we do if we taught class segments in online safety? My mostly unfounded belief is that if we had something reasonably easy to teach and understand that we could significantly improve user response. So, a lot of my goal with this project is to think about what would be easy to teach. What we have today clearly is not. I would not want to explain to someone how to tell from certificate information in a browser whether you have the right site. I actually think it might be easier to talk to people about what makes sense related to the tasks they are trying to perform. Is the information that was there last time still there?

I glossed over something really important: if you can present a clear warning that something is wrong, then you have a chance of catching the attention of a lot of users. Strong authentication in the context of federations may allow us to do that in a lot of common attack situations. I’ll come back to that in a future post.

Any More Information on the Vista Behavior Change?

Monday, October 22nd, 2007 by hartmans

According to this article, Vista SP1 changes the behavior of how Windows authenticates when dealing with a remote Kerberos realm. Based on the fix, I’m guessing that the domain_realm mapping behavior has changed. Can anyone guess what’s happening in this situation and whether we care about the issue?
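For readers unfamiliar with it, domain_realm mapping is the section of krb5.conf that maps host and domain names to Kerberos realms; the names below are hypothetical:

```
[domain_realm]
        .example.com = EXAMPLE.COM
        kdc-host.partner.org = PARTNER.ORG
```

A client consults this mapping (along with referrals, in newer implementations) to decide which realm to contact for a given server host, so a change in this behavior would alter which KDC Windows tries first when talking to a remote realm.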

Struggles in Transparency: KFW 3.2.2

Monday, October 22nd, 2007 by hartmans

Last week was an eye-opening experience at least for those of us on the core team. I think we began to really appreciate how much of a shift this is going to be and how many small things were involved.

A lot of our release process is focused around being efficient for a small team. We’re going to need to introduce significant communications in order to make sure people not at MIT understand what is going on and are sufficiently involved in the process. I think the big challenge of this effort will be to find a way to do so without bogging down an already manpower-intensive release process to the point where it does not meet our efficiency goals.

There were a couple of issues that popped up during the KFW 3.2.2 discussion last week. First, a long-standing process has been to give the release engineer flexibility to defer requests to pull specific changes into a point release. The release engineer is responsible for deciding that some change was submitted to the point release too late and will need to wait until the next point release. They make a tradeoff between the value of the fix and the possibility that the fix will break something. There hasn’t previously been a notification of the decision to defer a pull-up request; there has been no need. However we ran into a situation where we needed such a mechanism. We’ve agreed to update our procedures.

MIT has had a long-term policy of treating release schedules as confidential. We don’t want to get into a situation where someone depends on a release coming out by a specific date, we have to slip, and they run into trouble. We have worked with specific close partners to learn dependencies on our schedule, and where possible we have met those dependencies. We have a good track record of meeting partner dependencies that we’ve committed to. However, especially in the case of KFW, this model is inadequate. External contributors need to know when testing needs to happen. Being much more public about release schedules will be important for the consortium and for other external contributors as well. This is proving to be a bit rough to implement. However, I think we made good progress last week on understanding what needs to happen; the challenge is to put it into practice for future releases.

Performance, Security and NFS

Wednesday, October 17th, 2007 by hartmans

When I started working on computer security, one of the most common complaints was that it is hard to get good performance with security enabled. You don’t hear that as much today: computers are much faster and hardware acceleration is widely available. However, there are some cases where performance and security are still at odds. The NFS community wants to use Remote Direct Memory Access (RDMA) in order to improve performance. Current NFS implementations end up needing to copy NFS data multiple times, especially on the receiver. By using RDMA and by offloading TCP to the network interface controller, significant performance advantages can be gained.

However, on the surface this appears to be incompatible with NFS security. The problem is that in order to construct the security data and perform encryption, you typically have to copy the data or perform the same sorts of CPU interactions that are involved in copying. This can be handled by hardware acceleration in some protocols. However, NFS uses GSS-API for security. Few if any hardware devices support accelerated handling of GSS-API. Also, efficient hardware operations have not been a requirement in the design of GSS-API tokens.

IPsec seems like a good fit for RDMA security: packets can be decrypted before they are passed to the TCP layer. There is one huge problem. The sort of infrastructure you need to authenticate IPsec is not what you typically use to authenticate NFS. IPsec authentication tends to be per-host, not per-user, and IPsec doesn’t have a good way to work with Kerberos. (There are a couple of mechanisms, but they have non-ideal components.) One solution would be to have separate authentication at the NFS layer and at the IPsec layer. That is very problematic because it increases the cost. Also, the authentications can become mismatched, which decreases security. A huge cost is that two security infrastructures are harder to use than one.

The solution is channel binding. You generate a cryptographic name for the IPsec association. Then you exchange an integrity-protected version of this name over the GSS-API channel. Both ends of the channel confirm that the cryptographic name actually corresponds to the IPsec association. One nice thing about this is that the IPsec-level authentication doesn’t matter: it is important that both ends are the same, but it is not important to tie them to any real-world subjects. The downside of this approach is that it requires more complex interactions between the two layers. The advantage is that it actually provides security that is relatively easy to use.
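The exchange above can be sketched in a few lines. This is not real GSS-API or IPsec code; the hash-based channel name and the HMAC standing in for a GSS-API integrity check (MIC) are assumptions made for illustration:

```python
import hashlib
import hmac
import os

def channel_name(channel_params: bytes) -> bytes:
    # A cryptographic name for the lower-layer channel: a hash over
    # parameters of the IPsec association (simulated here as bytes).
    return hashlib.sha256(b"channel-binding:" + channel_params).digest()

def make_mic(session_key: bytes, data: bytes) -> bytes:
    # Stand-in for a GSS-API MIC: an integrity check keyed by the
    # session key established during GSS-API authentication.
    return hmac.new(session_key, data, hashlib.sha256).digest()

# Both endpoints observe the same IPsec association and share a
# GSS-API session key after authenticating each other.
ipsec_params = os.urandom(32)
session_key = os.urandom(32)

# The client sends a MIC over its view of the channel name; the
# server computes the same value from the channel it actually sees.
client_token = make_mic(session_key, channel_name(ipsec_params))
server_expected = make_mic(session_key, channel_name(ipsec_params))
assert hmac.compare_digest(client_token, server_expected)

# If a man-in-the-middle terminates the IPsec channel and re-originates
# it, the channel names differ and verification fails.
mitm_token = make_mic(session_key, channel_name(os.urandom(32)))
assert not hmac.compare_digest(mitm_token, server_expected)
```

The check succeeds only when the authenticated GSS-API exchange and the IPsec association terminate at the same two endpoints, which is exactly the property channel binding is meant to provide.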

Channel binding was designed for the NFS usage situation. However as Nico discusses, it can also be applied to other situations such as the web.

Unfortunately, the NFS specs being presented for approval completely sidestep the problem of security that performs well. I think that’s going to be a long discussion.

Opening the Development Process

Monday, October 1st, 2007 by hartmans

MIT Kerberos has largely been developed by a small group of people at any one time. We accept code from outside sources like Sun, Novell and the University of Michigan. However we spend a lot of time making that code fit our standards and design constraints. Few people outside of MIT are involved in setting policy or focusing on the overall architecture of the product beyond the few projects they care about. This needs to change.

At the same time as we were putting together the consortium launch last week, several members of the core team were meeting to discuss how we work with outside contributors. First, it’s clear that we need to get some. We need to interest people outside of MIT in dedicating significant time to working on MIT Kerberos and in caring about the product as a whole rather than just one subsystem or feature. Part of doing this will be offering these people real influence and the ability to get their work done without blocking on MIT.

We need to work on opening our processes and establishing clear policies and procedures for decision making. Over the next few weeks I hope to be presenting proposed policies for review. We also need to work on opening up our description of what projects are being worked on and on release processes. MIT and the consortium will control what priorities our staff focus on, but the rest of the community needs to be able to review how we plan to accomplish these tasks and work on tasks of their own.

We came to a few basic decisions at the meetings. First, MIT is not a special customer of MIT Kerberos. We will design a product that is right for all our users. MIT is a customer; we will try to make MIT happy but not at the expense of our other users. We also decided that we need to be careful to make projects available for public review and make sure that projects receive positive support before they are implemented.