Archive for August, 2008

Direction Shift for Web Authentication

Monday, August 18th, 2008 by hartmans

In May 2006 I published my first draft of requirements for web authentication mechanisms designed to reduce the impact of phishing. Last week I updated the requirements based on comments I received during last call, before going on paternity leave.

However, the landscape is different than it was when I started the document in May of 2006. At that time I thought that some significant work needed to be done on channel binding for HTTP. Channel binding is not the only way of meeting my requirements, but I think it is the best long-term approach. Several of us contributed to a framework for HTTP channel binding. This spring, Microsoft took a strong interest in HTTP channel binding and started using it for GSS-API, NTLM, and digest authentication. Microsoft also convinced us that a number of simplifications can be made, and much of the complexity we assumed was needed turns out not to be necessary. Their solution meets all or almost all of my requirements. The good news is that, at least within the enterprise, there appear to be some strong choices for authentication. Microsoft explicitly did not solve the upgrade and negotiation problem: you do not end up knowing whether you are using strong authentication or not.
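As a concrete sketch of what channel binding buys you, here is how a client might derive RFC 5929 "tls-server-end-point" binding data from a server's certificate. This is a simplified illustration of the general technique, not the exact data any particular implementation exchanges:

```python
import hashlib

def tls_server_end_point_cb(cert_der: bytes) -> bytes:
    """Compute RFC 5929 'tls-server-end-point' channel binding data.

    The binding data is a hash of the server's DER-encoded certificate.
    SHA-256 is used here; RFC 5929 selects the hash from the certificate's
    own signature algorithm, with SHA-256 replacing MD5 and SHA-1.
    """
    return b"tls-server-end-point:" + hashlib.sha256(cert_der).digest()
```

The client feeds this value into its authentication exchange (for example, via GSS-API channel bindings); the server computes the same value from its own certificate and rejects the exchange on mismatch, so an attacker terminating TLS with a different certificate cannot relay the authentication.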

I don’t think these requirements were ever about asking people to design entirely new authentication mechanisms. I had envisioned that the resulting IETF work would be small changes to existing mechanisms to enable channel binding, along with other minor changes. However, Microsoft seems to have gone off and done that work themselves, and their approach is not on the standards track.

So, I’ve been thinking about whether this document still has a purpose. I ultimately have concluded that it probably does. First, some people seem to be trying to design new authentication mechanisms for HTTP; if new mechanisms are going to be designed, I’d rather they be designed right. Secondly, I think there is still work to be done to make these mechanisms available on the public Internet. We need to work on how you know whether strong mechanisms are available. We also need to work on ways to integrate them into websites outside of closed communities. Nico thinks that an extension to XMLHttpRequest is the right approach there. Something is probably needed for DAV and Atom agents to indicate support. Some of that work may belong in the IETF.

It is definitely clear to me now that I do not want to encourage people to develop more HTTP authentication mechanisms in order to meet these requirements. The Microsoft experience has convinced me that if work is needed on specific mechanisms, it is relatively small and should be a point solution to a bug or a lack of negotiation. I definitely do not want someone to point to this document as an argument for entirely new authentication mechanisms. However, I think the current purpose section can be read that way. So, it looks like it’s time for another update to try to make that clearer.

How OpenID may contribute to Phishing

Friday, August 15th, 2008 by hartmans

OpenID provides a single-sign-on solution for websites without requiring browser modifications. The idea is that you can go to an identity provider where you have an account, log in, and then you can go to other websites and point them at your identity provider to validate your ID. You only have to type your password once. It’s very convenient.
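To make the flow concrete, here is a toy single-sign-on exchange in the OpenID style. The message format, class names, and signing scheme are invented for illustration; real OpenID uses HTTP redirects and a defined association protocol:

```python
import hmac, hashlib, secrets

class IdentityProvider:
    """Toy identity provider: checks passwords and signs assertions with a
    per-relying-party shared secret established out of band."""
    def __init__(self):
        self.users = {"alice": "correct horse"}
        self.rp_secrets = {}

    def associate(self, rp_name):
        # Real OpenID negotiates this secret via an association handshake.
        self.rp_secrets[rp_name] = secrets.token_bytes(32)
        return self.rp_secrets[rp_name]

    def login_and_assert(self, user, password, rp_name):
        if self.users.get(user) != password:
            return None
        msg = f"{user}@{rp_name}".encode()
        sig = hmac.new(self.rp_secrets[rp_name], msg, hashlib.sha256).hexdigest()
        return {"user": user, "rp": rp_name, "sig": sig}

class RelyingParty:
    """Toy website that trusts the identity provider's signed assertions."""
    def __init__(self, name, idp):
        self.name, self.secret = name, idp.associate(name)

    def verify(self, assertion):
        msg = f"{assertion['user']}@{self.name}".encode()
        expected = hmac.new(self.secret, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, assertion["sig"])
```

The user proves their password only to the identity provider; each website just checks a signed assertion. That is the convenience, and, as discussed below, also where the weak links appear.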

OpenID proponents argue that OpenID does not contribute to phishing. It also doesn’t really solve phishing, but the hope is that if we manage to get strong authentication to the identity provider, then that strength will cascade across all the sites that use the identity provider. Some have pointed out that single-sign-on systems provide attractive targets for phishing: once an attacker has your OpenID password they may have access to many more resources. That’s certainly true, but single-sign-on has security benefits in terms of reducing the number of passwords people need and making them less likely to believe that some random server actually has a legitimate need for a password. So, it’s not clear to me how this argument balances out.

However, I think there is a bigger contribution from schemes like OpenID to phishing. Even if the authentication to the identity provider is strong, the handoff between the target website and the identity provider is weakly authenticated in OpenID. In particular, OpenID depends on TLS certificate validation and on correctly going to the right URI to identify the right website. As I discussed earlier this week, the W3C is poised to move us to a world where we admit that self-signed certificates have a place and accept that sometimes we will not have strong authentication when we first go to a site. Unfortunately, because OpenID decouples the authentication to the identity provider from the authentication between the identity provider and the website, improvements in the authentication on one side will not increase the security of the overall system. Two attacks are possible. The first is that an attacker might mount a man-in-the-middle or other attack between the identity provider and the target website. Even though the user authenticates strongly to the identity provider, they are left with protections of their eventual authentication no stronger than today’s TLS. The second is that a target website may not actually participate in the authentication exchange. If the website is after capturing your credit card information, it may never forward you back to your identity provider; instead it may just make it appear as if authentication succeeded.

I don’t think either of these attacks is particularly interesting today: there are bigger problems. However if we’re successful in strengthening web authentication mechanisms we’ll need to think about how to help folks like OpenID evolve their technology to avoid being the weakest link.

W3C Guidelines for Usable Security Context in Last Call

Wednesday, August 13th, 2008 by hartmans

The Web Security Context working group of the W3C has begun a last call on its User Interface Guidelines. The link is to the version being last called, which may be updated before the recommendation is published. The last call runs until September 15.

I like the approach these recommendations take. They strike a balance between security and usability. One of the controversial changes that they make is they recommend against warnings when you go to a website that is using a self-signed certificate or that chains back to something that you don’t consider a trust anchor. The idea is that a lot of people use self-signed certificates for appliances or within small communities. If you present security warnings in these cases then you reduce the value of all security warnings. This does make it easier to attack a site the first time someone goes to it. Browsers must remember if they have seen a validated certificate for a site; a site that once presented a valid certificate must not present a self-signed certificate in the future.
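A browser following that rule needs some per-site memory. Here is a minimal sketch of such a store; the severity names and decision logic are my own illustration, not taken from the W3C draft:

```python
NOTIFICATION, WARNING, DANGER = "notification", "warning", "danger"

def classify(store, host, fingerprint, ca_validated):
    """Decide what signal to show for a site's certificate.

    store maps host -> (status, fingerprint) remembered from earlier visits.
    Returns None when no signal is needed.
    """
    prior = store.get(host)
    if ca_validated:
        store[host] = ("validated", fingerprint)
        return None
    if prior and prior[0] == "validated":
        return DANGER        # downgrade from a previously validated cert
    if prior and prior[1] == fingerprint:
        return None          # familiar self-signed cert: stay quiet
    first = prior is None
    store[host] = ("self-signed", fingerprint)
    return NOTIFICATION if first else WARNING   # new or changed self-signed cert
```

The point of the sketch is the asymmetry: a self-signed certificate on a home appliance stays quiet after first use, while a validated-to-self-signed downgrade is the rare case that earns a danger signal.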

Another great thing about the recommendation is the handling of errors. Errors are separated into notifications, warnings and danger signals. The main advantage of this separation is that danger signals are used only when there is sufficient evidence that something bad is going on that may put the user at risk. As such, danger signals can be taken seriously.

It’s not clear that everyone will take advantage of these mechanisms, but it seems like a great step in the right direction. This work also aligns well with the authentication requirements work I’ve been doing.

DNS Forgery Threatens Kerberos

Tuesday, August 5th, 2008 by hartmans

DNS forgery attacks have been in the news recently in a big way: a story in the New York Times said that details of a new DNS attack will be released this week. The basic idea is that it is possible to trick a recursive name server into believing responses provided by an attacker instead of responses provided by the real authoritative DNS server. The recursive name server passes this poisoned data along to its clients, who use the information to translate names to addresses and for other DNS functions. As the Wikipedia article points out, particularly effective targets for DNS forgery are the authority records in DNS responses. That is, if the attacker can overwrite the DNS records that specify which name servers should be consulted for a particular domain, then the attacker can capture all future DNS queries for that domain. For example, if an attacker mounted a forgery against Comcast’s name servers targeting yahoo.com, then the attacker could control which computers all Comcast customers connect to for any yahoo.com names. The details to be released this week are expected to show how such an attack can be mounted in a matter of seconds with high reliability; patches are available, although there is ongoing discussion about how effective they are. It is quite clear that the patches do not fix the problem at a fundamental level: they are believed to make such an attack much less likely to succeed, or to increase the time it takes.
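A rough back-of-envelope calculation shows why the patches raise the cost of the attack without removing it. An off-path attacker must match the query's 16-bit transaction ID; the patches add roughly another 16 bits of entropy via source-port randomization, multiplying the search space rather than eliminating it. The figures here are illustrative:

```python
# 16-bit DNS transaction ID; before the patches many resolvers used a
# fixed source port, so the ID was all an attacker had to guess.
ID_SPACE = 2 ** 16
PORT_SPACE = 2 ** 16   # approximate entropy added by port randomization

def expected_forged_packets(randomized_ports: bool) -> int:
    """Expected number of forged responses before one matches.

    On average, half the search space must be covered.
    """
    space = ID_SPACE * (PORT_SPACE if randomized_ports else 1)
    return space // 2

# Unpatched: ~32768 packets, achievable in seconds on a fast link.
# Patched: ~2**31 packets, pushing the attack out to hours or days.
```

The conclusion matches the discussion above: the patched resolver is still guessable in principle, just much more expensive to hit.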

Obviously, this attack is of concern for the global Internet. However, the Kerberos community should pay particular attention. As we all know, RFC 4120 states that insecure mapping services such as DNS without DNSSEC MUST NOT be used to map user input into authentication names. However, as discussed in The Role of Kerberos in Modern Information Systems, non-Microsoft Kerberos implementations use DNS to map names entered by the user into names that are used within Kerberos. So, consider an attacker that mounts a forgery and is able to modify all DNS responses for example.com. If this attacker can take over a single system registered with example.com’s Kerberos (or learn the Kerberos key of such a system), then they can defeat Kerberos security when authenticating to any system in that Kerberos infrastructure, provided that the client uses DNS. There are some core Kerberos services, such as password changing and the KDC itself, that never use DNS in this way. Microsoft implementations also do not depend on this use of DNS. However, other implementations tend to use DNS even for relatively sensitive operations such as SSH used for administrative access to a server. In other words, an easy attack that can be mounted against DNS in a matter of seconds is a huge problem for Kerberos. Administrators of Kerberos infrastructure need to ensure that DNS server patches are applied in their environments. Hopefully these patches will make the attack hard enough to mount that we have some time to put together a better long-term solution.
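To see where DNS enters the trust path, here is a simplified sketch of how a client ends up building a host-based service principal. The function is hypothetical; in real libraries this logic lives inside krb5 name-canonicalization APIs:

```python
import socket

def service_principal(service: str, hostname: str, realm: str,
                      canonicalize_via_dns: bool = True) -> str:
    """Build a host-based Kerberos principal name.

    Many non-Microsoft clients effectively do this: the short name the
    user typed is expanded to a fully qualified name using DNS before the
    principal is formed. If an attacker controls the DNS answers, they
    control which service key the client will end up accepting.
    """
    if canonicalize_via_dns:
        # The attacker-influencable step: forward and reverse DNS lookups.
        hostname = socket.getfqdn(hostname)
    return f"{service}/{hostname.lower()}@{realm}"
```

With forged DNS, `service_principal("host", "mail", "EXAMPLE.COM")` can silently become the principal of a machine the attacker already controls, and Kerberos then authenticates the client to the wrong key quite correctly.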

We’ve known that this use of DNS is problematic for a long time. We even have a better solution: storing aliases of hosts in KDC databases. But I’ve never seen a good answer for how to get from where we are today to a secure configuration. If you don’t provide a transition strategy, you will find it difficult to convince users to give up the mode that works in favor of the more secure mode. However, at last Tuesday’s Kerberos Working Group meeting, Apple’s Love Hörnquist Åstrand proposed a solution that I think will work. Love proposed that the client learn from the KDC whether a realm supports KDC aliases and has its database properly populated. If the KDC indicates aliases are available, then the client does not use DNS for mapping. The essential bit I had missed before is that this is a realm-by-realm transition. If my client is going to talk to a particular KDC, the question I care about is whether that KDC supports aliases. In the past I had thought some sort of global transition was needed. Adopting Love’s proposal will take work, especially surrounding APIs such as krb5_sname_to_princ, but doing this work seems critical.
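The client-side logic of such a proposal might look roughly like this; the function names are hypothetical, and the point is only that the DNS fallback is decided per realm rather than globally:

```python
def resolve_server_name(typed_name, realm, kdc_supports_aliases, dns_canonicalize):
    """Pick the name to put in the service principal for a given realm.

    kdc_supports_aliases: callable telling us whether this realm's KDC
    maintains alias entries in its database.
    dns_canonicalize: the legacy, DNS-dependent expansion path.
    """
    if kdc_supports_aliases(realm):
        # Send the name as typed; the KDC matches it against alias
        # entries in its database, so insecure DNS is never consulted.
        return typed_name
    # Legacy realm: fall back to the existing DNS-based canonicalization.
    return dns_canonicalize(typed_name)
```

An upgraded realm gets the secure behavior immediately while unupgraded realms keep working, which is exactly what makes the transition realm-by-realm.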

Painless Security, LLC Formed

Tuesday, August 5th, 2008 by hartmans

I’m pleased to announce that I’m now in business: Painless Security, LLC was formed last week. The lack of a company has not kept me from being busy, but having a company makes it much easier to set up agreements. The standard agreement should be on the website in a couple of days.

Integrating Kerberos into your Application Released

Sunday, August 3rd, 2008 by hartmans

Painless Security has been working with The Interisle Consulting Group and the MIT Kerberos Consortium on the consortium’s paper on how to integrate Kerberos into applications. The paper is now available to the public. The paper gives an overview of GSS-API, SASL and the raw Kerberos messages. It talks about what you hope to get out of integrating Kerberos into an application. Then it discusses several issues to consider when planning your Kerberos integration, including naming, intermediaries and other complicated issues. Finally, the paper points to several examples of application integration.

I think the paper will be useful; I know it covers a lot of issues I have run into over the years. When I first heard about the plan for this paper, I expected that it would involve a walk-through of how to integrate Kerberos into some simple application. Other people expected this too: the most consistent comment we’ve received is that there is no tutorial. The paper does point to tutorials for GSS-API in C and Java, but does not include a tutorial of its own. The reason is that there already seemed to be tutorials out there. However, there didn’t seem to be an overview to help people choose between SASL and GSS-API, to understand the hard issues, and to give best-practice advice on avoiding common pitfalls.

I’m very interested in feedback on the paper. I’d especially love to get feedback from those new to the Kerberos community; the comments we’ve received to date are all from people who have been at this for years.