Archive for the ‘Phishing’ Category

Privacy

Friday, December 10th, 2010 by hartmans

I attended a workshop sponsored by the IAB, W3C, ISOC and MIT on Internet Privacy. The workshop had much more of a web focus than it should have: the web is quite important and should certainly cover a majority of the time, but backend issues, network issues, and mobile applications are certainly important too. For me this workshop was an excellent place to think about linkability and correlation of information. When people describe attacks such as using the ordered list of fonts installed in a web browser to distinguish one person from another, it’s all too easy to dismiss people who want to solve that attack as the privacy fringe. Who cares if someone knows my IP address or what fonts I use? The problem is that computers are very good at putting data together. If you log into a web site once, and then later come back to that same website, it’s relatively easy to fingerprint your browser and determine that it is the same computer. There’s enough information that even if you use private browsing mode, clear your cookies and move IP addresses, it’s relatively easy to perform this sort of linking.
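The linking works because many individually weak attributes combine into one strong identifier. A minimal sketch (attribute names and values invented for illustration) shows how a site could recognize a returning browser even after cookies are cleared and the IP address changes:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash several individually weak signals into one strong identifier."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visit_monday = {
    "fonts": "Arial,Calibri,Comic Sans,Wingdings",
    "user_agent": "ExampleBrowser/3.1",
    "screen": "1280x800",
    "timezone": "UTC-5",
}
# Next day: cookies cleared, private browsing, new IP address -- but the
# passive attributes the site can observe have not changed.
visit_tuesday = dict(visit_monday)

assert fingerprint(visit_monday) == fingerprint(visit_tuesday)
```

No single attribute here identifies anyone; it is the combination that does.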

It’s important to realize that partially fixing this sort of issue makes it take longer to link two things with certainty, but tends not to actually help in the long run. Consider the font issue. If your browser returns the set of fonts it has in the order they were installed, that provides a lot of information. Your fingerprint will only look the same as those of people who took the same OS updates and browser updates and installed the same additional fonts in exactly the same order as you. Let’s say the probability that someone has the same font fingerprint as you is one in a million. For a lot of websites that’s enough that you could very quickly be linked. Sorting the list of fonts reduces the information; in that case, let’s say your probability of having the same font set as someone else is one in a hundred. The website gets much less information from the fonts. However, it can combine that information with timing information and other signals. It can immediately rule out all the people who have a different font profile. Then, as the other people who share your font fingerprint access the website over time, differences between them and you will continue to rule them out until eventually only you are left. Obviously this is at a high level. One important high-level note is that you can’t fix these sorts of fingerprinting issues on your own; trying makes things far worse. If you’re the only one whose browser doesn’t give out a font list at all, then it’s really easy to identify you.
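The arithmetic behind "partial fixes only slow linking down" can be made concrete. Using the hypothetical probabilities from above (and assuming, for the sketch, that the signals are independent), each matching signal reveals some number of bits, and the bits simply add:

```python
import math

# Hypothetical per-signal probabilities that a random other user
# matches you on each signal (numbers invented for illustration).
signals = {
    "sorted font list": 1 / 100,   # the fonts after the sorting fix
    "timezone":         1 / 24,
    "screen size":      1 / 20,
}

def bits(p: float) -> float:
    """Information revealed by a matching signal, in bits."""
    return -math.log2(p)

ordered_fonts = bits(1 / 1_000_000)               # ~20 bits before the fix
combined = sum(bits(p) for p in signals.values())  # sorted fonts plus the rest

print(f"ordered font list alone: {ordered_fonts:.1f} bits")
print(f"sorted fonts plus other signals: {combined:.1f} bits")
```

Sorting the fonts cuts that one signal from roughly 20 bits to under 7, but combined with a couple of other mundane signals the total climbs right back toward uniqueness, and every additional observation over time adds more.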

The big question in my mind now is how much do we care about this linking. Governments have the technology to do a lot with linking. We don’t have anything technical we can do to stop them, so we’ll need to handle that with laws. Large companies like Google, Facebook and our ISPs are also in a good position to take significant advantage of linking. Again, though, these companies can be regulated; technology will play a part, especially in telling them what we’re comfortable with and what we’re not, but most users will not need to physically prevent Google and Facebook from linking their data. However, smaller websites are under a lot less supervision than the large companies. Unless you take significant steps, such a website can link all your activities on that website. Also, if any group of websites in that space want to share information, they can link across the websites.

I’d like to run thought experiments to understand how bad this is. I’d like to come up with examples of things that people share with small websites but don’t want linked together, or alternatively don’t want linked back to their identity, and then look at how this information could be linked. However, I’m having trouble with these thought experiments because I’m just not very privacy minded. I can’t think of something that I share on the web that I wouldn’t link directly to my primary identity. I certainly can’t find anything concrete enough to be able to evaluate how much I care to protect it. I’d appreciate help here if you can think of fairly specific examples. There’s lots of information I prefer to keep private, like credit card numbers, but there it’s not about linking at all. I can reasonably assume that the person I’m giving my credit card number to has a desire to respect my privacy.

Direction Shift for Web Authentication

Monday, August 18th, 2008 by hartmans

In May of 2006 I published my first draft of requirements for web authentication mechanisms designed to reduce the impact of phishing. Last week I updated the requirements based on some comments I received during last call before going on paternity leave.

However the landscape is different than it was when I started the document in May of 2006. At that time I thought that some significant work needed to be done on channel binding for HTTP. Channel binding is not the only way of meeting my requirements, but I at least think it is the best long-term approach. Several of us contributed to a framework for HTTP channel binding. This spring, Microsoft took a strong interest in HTTP channel binding and started using it for GSS-API, NTLM and digest authentication. Microsoft also convinced us that a number of simplifications can be made and much of the complexity we assumed was needed turns out not to be necessary. Their solution meets all or almost all of my requirements. The good news is that at least within the enterprise, there appear to be some strong choices for authentication. Microsoft explicitly did not solve the upgrade and negotiation problem: you do not end up knowing whether you are using strong authentication or not.

I don’t think these requirements were ever about asking people to design entirely new authentication mechanisms. I had envisioned that the resulting work in the IETF would be small changes to existing mechanisms to enable channel binding, or other minor changes. However, Microsoft seems to have gone off and done that themselves. Their approach is not on the standards track.

So, I’ve been thinking about whether this document still has a purpose. I ultimately have concluded that it probably does. First, some people seem to be trying to design new authentication mechanisms for HTTP. I’d rather they do it right if they are going to be designed. Secondly, I think there is still work to be done in order to make these mechanisms available on the public Internet. We need to work on how you know if strong mechanisms are available. We also need to work on ways to integrate them into websites outside of closed communities. Nico thinks that an extension to xmlhttprequest is the right approach there. Something is probably needed for DAV and Atom agents to indicate support. Some of that work may belong in the IETF.

It is definitely clear to me now that I do not want to go encourage people to develop more HTTP authentication mechanisms in order to meet these requirements. The Microsoft experience has convinced me that if work is needed on specific mechanisms, it is relatively small and should be a point solution to a bug or lack of negotiation. I definitely do not want someone to point to this document as an argument for entirely new authentication mechanisms. However, I think the current purpose section can be read that way. So, it looks like it’s time for another update to try and make that more clear.

How OpenID may contribute to Phishing

Friday, August 15th, 2008 by hartmans

OpenID provides a single-sign-on solution for websites without requiring browser modifications. The idea is that you can go to an identity provider where you have an account, log in, and then you can go to other websites and point them at your identity provider to validate your ID. You only have to type your password once. It’s very convenient.

OpenID proponents argue that OpenID does not contribute to phishing. It also doesn’t really solve phishing, but the hope is that if we manage to get strong authentication to the identity provider, then that strength will cascade across all the sites that use the identity provider. Some have pointed out that single-sign-on systems provide attractive targets for phishing: once an attacker has your OpenID password they may have access to many more resources. That’s certainly true, but single-sign-on has security benefits in terms of reducing the number of passwords people need and making them less likely to believe that some random server actually has a legitimate need for a password. So, it’s not clear to me how this argument balances out.

However I think there is a bigger contribution from schemes like OpenID to phishing. Even if the authentication to the identity provider is strong, the hand-off between the target website and the identity provider is weakly authenticated in OpenID. In particular, OpenID depends on TLS certificate validation and on correctly going to the right URI to identify the right website. As I discussed earlier this week, the W3C is poised to move us to a world where we admit that self-signed certificates have a place and accept that sometimes we will not have strong authentication when we first go to a site. Unfortunately, because OpenID decouples the authentication to the identity provider from the authentication between the identity provider and website, improvements in the authentication on one side will not increase the security of the overall system. Two attacks are possible. The first is that an attacker might mount a man-in-the-middle or other attack between the identity provider and target website. Even though the user authenticates strongly to the identity provider, they are left with protections of their eventual authentication no stronger than today’s TLS. The second is that a target website may not actually participate in the authentication exchange. If the website is after capturing your credit card information, it may never forward you back to your identity provider; instead it may just make it appear as if authentication was successful.
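That second attack is worth spelling out, because nothing the user sees distinguishes a site that really verified an identity-provider assertion from one that skipped the exchange entirely. A toy sketch (the classes, signing scheme, and names are invented for illustration, not OpenID's actual wire format):

```python
import hashlib
import hmac
import json

IDP_KEY = b"idp-signing-key"  # hypothetical signing key, for illustration only

class IdentityProvider:
    def issue_assertion(self, identity: str) -> dict:
        payload = json.dumps({"identity": identity}).encode()
        sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
        return {"payload": payload, "sig": sig}

    def verify(self, assertion: dict) -> bool:
        expected = hmac.new(IDP_KEY, assertion["payload"], hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, assertion["sig"])

def honest_site(identity: str, idp: IdentityProvider) -> str:
    # Redirect the user to the identity provider, then verify the
    # signed assertion that comes back before granting access.
    assertion = idp.issue_assertion(identity)
    if not idp.verify(assertion):
        raise PermissionError("bad assertion")
    return f"welcome, {identity}"

def rogue_site(identity: str, idp: IdentityProvider) -> str:
    # The second attack: never contact the identity provider at all;
    # just behave as if login succeeded, then ask for the card number.
    return f"welcome, {identity}"

idp = IdentityProvider()
# From the user's point of view, the two sites are indistinguishable:
assert honest_site("alice", idp) == rogue_site("alice", idp)
```

Strengthening the user-to-provider leg does nothing here; the rogue site simply opts out of the protocol.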

I don’t think either of these attacks is particularly interesting today: there are bigger problems. However if we’re successful in strengthening web authentication mechanisms we’ll need to think about how to help folks like OpenID evolve their technology to avoid being the weakest link.

W3C Guidelines for Usable Security Context in Last Call

Wednesday, August 13th, 2008 by hartmans

The Web Security Context working group of the W3C has begun a last call on its User Interface Guidelines. The link is to the version being last called, which may be updated before the recommendation is published. The last call runs until September 15.

I like the approach these recommendations take. They strike a balance between security and usability. One of the controversial changes they make is to recommend against warnings when you go to a website that is using a self-signed certificate or one that chains back to something you don’t consider a trust anchor. The idea is that a lot of people use self-signed certificates for appliances or within small communities. If you present security warnings in these cases then you reduce the value of all security warnings. This does make it easier to attack a site the first time someone goes to it. Browsers must remember if they have seen a validated certificate for a site; a site that once presented a valid certificate must not present a self-signed certificate in the future.
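The "remember validated certificates" rule amounts to trust-on-first-use with a one-way ratchet. A minimal sketch of that logic (the function and severity names are mine, not the guideline's terminology):

```python
# Hosts that have previously presented a CA-validated certificate.
seen_validated: set[str] = set()

def classify(host: str, cert_validated: bool) -> str:
    """Decide how loudly to react to a site's certificate status."""
    if cert_validated:
        seen_validated.add(host)
        return "ok"
    if host in seen_validated:
        # A site that once validated now shows a self-signed cert:
        # that downgrade is a danger signal, not a mere notice.
        return "danger"
    # Self-signed on first contact: no scary warning.
    return "notice"

assert classify("printer.local", False) == "notice"
assert classify("bank.example", True) == "ok"
assert classify("bank.example", False) == "danger"
```

The payoff is exactly the separation described below: the loud signal fires only when there is real evidence of a downgrade.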

Another great thing about the recommendation is the handling of errors. Errors are separated into notifications, warnings and danger signals. The main advantage of this separation is that danger signals are used only when there is sufficient evidence that something bad is going on that may put the user at risk. As such, danger signals can be taken seriously.

It’s not clear that everyone will take advantage of these mechanisms, but it seems like a great step in the right direction. This work also aligns well with the authentication requirements work I’ve been doing.

IETF 70 in Vancouver

Wednesday, December 12th, 2007 by hartmans

Last week, I attended IETF 70 in Vancouver. I had hoped to blog during the week, but the schedule was sufficiently intense that I did not find the time. However, I think that there are a few points that are worth summarizing.

At the TLS session, the topic of using EAP authentication in TLS was discussed. The idea is that EAP can be used within a TLS session for authentication either instead of or in addition to certificates. This would allow one-time passwords, SIMs or other authentication methods to be used in TLS. In principle this could be very useful. There’s one significant problem: EAP has a fairly strict applicability statement which says that EAP should only be used for network access. There are a lot of reasons you might want this limitation. It’s useful to be able to distinguish EAP from GSS-API and SASL. When should one framework be used and when should another framework be used? Also, EAP is a three-party authentication protocol. The EAP peer authenticates to an EAP server, typically by using a passthrough authenticator such as an 802.11 access point. However, EAP does not typically verify the identity of the passthrough authenticator. If EAP is only used for network access, this is a manageable risk; Russ Housley argued at the IETF 70 HOKEY session that for service provider uses of EAP, this may not even be a problem. However, as the applicability expands, the identity of the party actually taking advantage of the authentication becomes more critical. You don’t want to give someone access to your mail when you thought you were accessing some wireless network. I think that this effort has enough support that it is more desirable to figure out how to do it right than to try and stop it. So, the big question is what is necessary to expand the EAP applicability safely.

There was a BOF on web authentication. There seems to be general interest in doing the work although the problem is difficult. I hope to get Nico, Leif and the folks from Mozilla together.

The SASL working group is focusing on a new password mechanism designed to provide authentication and channel binding. We ran into two challenges. The first is that channel binding data may sometimes require confidentiality. Our existing approach for channel binding in SASL does not guarantee that the channel binding provides confidentiality. It also turns out that it is more complex than I had hoped to provide channel binding in a mechanism that is both a GSS-API mechanism and a SASL mechanism. For a GSS-API mechanism you want channel bindings to be included as part of the authentication exchange. However SASL does not take advantage of this but instead uses a wrap token. Ideally you would not need both facilities. I’ll discuss this issue on the list, but I’m not sure there is a good way around this.
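To illustrate why including channel binding in the authentication exchange matters, here is a toy SCRAM-style proof computation (not the working group's actual mechanism; the key derivation, salt, and channel-binding strings are invented for illustration). Because the proof is keyed over the channel binding data, a proof captured and relayed by a man in the middle, who necessarily terminates TLS on its own connection, fails to verify:

```python
import hashlib
import hmac

def client_proof(password: bytes, channel_binding: bytes, nonce: bytes) -> bytes:
    # Derive a key from the password, then bind the proof to this TLS
    # channel: the same proof computed over a different channel differs.
    key = hashlib.pbkdf2_hmac("sha256", password, b"per-user-salt", 4096)
    return hmac.new(key, channel_binding + nonce, hashlib.sha256).digest()

password, nonce = b"correct horse", b"server-nonce-123"
user_channel = b"tls-unique of the user's TLS connection"
mitm_channel = b"tls-unique of the attacker's separate TLS connection"

# The server recomputes the proof over ITS view of the channel. Talking
# directly to the server, the views agree; through a man in the middle,
# they do not, and the relayed proof is rejected.
assert client_proof(password, user_channel, nonce) == \
       client_proof(password, user_channel, nonce)
assert client_proof(password, user_channel, nonce) != \
       client_proof(password, mitm_channel, nonce)
```

The GSS-API/SASL tension in the text is about where this binding data enters the exchange, not about whether binding is useful.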

HOKEY focused on the role of AAA proxies and EAP. RFC 4962 describes how we want key management for network access to work. Ideally keys would be shared by at most three parties. However, real-world networks tend to involve AAA proxies between providers that also know the keys. There is a significant ongoing debate about to what extent we need to provide a solution that works around this problem. There is no ongoing work to meet this requirement. The good news is that the difference in consensus seems to be a lot less than I had originally expected.

I will speak to the Kerberos activities at IETF in a later entry.

New Phishing Draft Published

Tuesday, November 20th, 2007 by hartmans

A new version of my phishing draft is out. This draft significantly improves the discussion of the threat model based on comments I’ve received. I’ve also tried to distinguish between two uses of passwords: passwords as a user interface element and plaintext passwords sent as a protocol element. The first is a necessity if we’re going to meet users’ needs; the second must be avoided.

Phishing and UI: Is the Future our Hope?

Wednesday, October 24th, 2007 by hartmans

I’ve been working on requirements for web authentication systems that will help us fight phishing. My current draft is here. Today, if you make the mistake of sending your password to the wrong place on the web, you have compromised your identity until you change that password everywhere. If you disclose your password to some harmless site it may not matter. However, if you have been successfully phished, you have a real problem. Imagine if, when you accidentally tried to unlock the wrong car, a copy of your car key, license plate and home address got mailed to the owner of the car you mistakenly tried to open. The web doesn’t work quite that way, but it does have one important property in common. Very simple, easy-to-make mistakes have significant long-term consequences. My hope is that at least in the case of authentication we can move to a model where that’s not the case.

User interface will be critical to any such transition. During a very long transition period, you will have both the current system and the new system in use at the same time. You need to be able to tell which one you are using. In some ways this will be similar to whether the lock icon is present or not. The UI challenges will be the same, although it may be that the meaning of new mode authentication has less subtlety than whether the lock icon implies that your web session is secure.

Ideally UI can be important in establishing that you connected to the right place after the connection. For example you probably recognize your account balance at your bank. If it is radically different or missing, you could choose to go look at why. If you weren’t able to find an explanation, you could be suspicious of the site. Perhaps you were directed to the wrong place and this is not really your bank.

That’s my hope. However, research is grim on the effectiveness of UI in security. (Other papers draw similar conclusions). Users seem to ignore the lock icon almost all the time. Schemes designed to help users determine if they are connected to the right website also fail; even when something is suspicious, users go on and disclose confidential information. The only thing that has significant promise is cases where the browser can present a warning page about security problems.

My proposal is a bit different. In the case of UI clues like account balance and a list of accounts, the UI clues are related to the task the user is actually trying to perform. It may be less likely that unusual events will be ignored. However users may explain them away as system problems or upgrades.

It’s not clear to me that there is much we can do at all if the user response to UI information is as grim as research hints at. However, I have to wonder what the role of education is in improving things. We spend significant time teaching kids to use a library and other important life skills. How much could we do if we taught class segments in online safety? My mostly unfounded belief is that if we had something reasonably easy to teach and understand that we could significantly improve user response. So, a lot of my goal with this project is to think about what would be easy to teach. What we have today clearly is not. I would not want to explain to someone how to tell from certificate information in a browser whether you have the right site. I actually think it might be easier to talk to people about what makes sense related to the tasks they are trying to perform. Is the information that was there last time still there?

I glossed over something really important: if you can present a clear warning that something is wrong, then you have a chance of catching the attention of a lot of users. Strong authentication in the context of federations may allow us to do that in a lot of common attack situations. I’ll come back to that in a future post.