Guess it’s time to wait for OAuth 1.1

Last week, I decided to take a stab at implementing a basic OAuth consumer in order to integrate Twitter Karma with Twitter using OAuth. I’d read through the OAuth 1.0 specification before, but never closely enough to realize that there was a serious attack vector in it. So, on April 16th, I mentioned this on the twitter-development-talk mailing list, hoping to get some answers:

Also, the redirect to the callback URL has no signature. What stops an attacker from brute-force attacking an OAuth consumer, iterating through possible tokens? Simply the large search space of valid OAuth tokens? Even if it’s only “possible in theory” … some teenager with nothing better to do is going to eventually turn that theory into practice.

My concern was met with confusion, so on April 17th I brought it to the OAuth mailing list as well:

Currently, the OAuth callback URL is susceptible to replay attack and token shooting. Signing it would eliminate this in a very low-effort way.

[…] it’s very desirable to be able to tell if the callback was legitimate or either a replay attack or a brute-force token shooting attack.

Even client-side browser cookies may not win here if a simple session fixation attack is coupled with the token shooting attack.

I wasn’t sure I was being understood there, either, but then a few days later, on April 20th, Twitter yanked their OAuth support offline.

Eventually, an OAuth security advisory, 2009-1, was published on April 23rd that clearly states “A modified specification to address this issue is forthcoming” (as I had pointed out was necessary), and Eran Hammer-Lahav wrote a thorough explanation of the problem on his blog. The money quote:

If we put it all together, even if an application:

  • Makes sure to use the account of the user coming back from the callback and not that of the one who started
  • Checks that the user who starts the process is the same as the one who finishes it
  • Uses an immutable callback set at registration

It is still exposed to a timing attack trying to slide in between the victim authorizing access and his browser making the redirection call. There is simply no way to know that the person who authorizes is the same person coming back. None whatsoever.

This is exactly why the callback URL that the OAuth provider uses to redirect the user back to the OAuth consumer needs to be signed.
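
To make the fix I’m arguing for concrete, here’s a minimal sketch of what a signed callback could look like. This is just an illustration in Python; the parameter names (oauth_timestamp, oauth_callback_signature) are made up for this example and are not part of OAuth 1.0 or any proposed revision. The idea is simply that the provider signs the redirect parameters with a secret the consumer already shares (the consumer secret), so the consumer can reject a redirect that the provider never issued, or one that has gone stale.

```python
# Illustrative sketch only: parameter names are invented for this example,
# not taken from the OAuth 1.0 spec or any forthcoming revision.
import hashlib
import hmac
import time
import urllib.parse

# Shared secret established when the consumer registered with the provider.
CONSUMER_SECRET = b"kd94hf93k423kf44"


def _base_string(params: dict) -> bytes:
    """Canonicalize parameters the same way on both sides before signing."""
    return "&".join(
        f"{k}={urllib.parse.quote(v, safe='')}" for k, v in sorted(params.items())
    ).encode()


def sign_callback(callback_url: str, oauth_token: str) -> str:
    """Provider side: redirect the user with a signature over the callback parameters."""
    params = {
        "oauth_token": oauth_token,
        "oauth_timestamp": str(int(time.time())),
    }
    params["oauth_callback_signature"] = hmac.new(
        CONSUMER_SECRET, _base_string(params), hashlib.sha1
    ).hexdigest()
    return callback_url + "?" + urllib.parse.urlencode(params)


def verify_callback(redirect_url: str, max_age: int = 300) -> bool:
    """Consumer side: recompute the HMAC and check freshness before trusting the token."""
    query = dict(urllib.parse.parse_qsl(urllib.parse.urlsplit(redirect_url).query))
    claimed = query.pop("oauth_callback_signature", "")
    expected = hmac.new(CONSUMER_SECRET, _base_string(query), hashlib.sha1).hexdigest()
    fresh = time.time() - int(query.get("oauth_timestamp", "0")) <= max_age
    return hmac.compare_digest(claimed, expected) and fresh


if __name__ == "__main__":
    url = sign_callback("https://consumer.example/callback", "hh5s93j4hdidpola")
    print(verify_callback(url))                        # True: legitimate redirect
    print(verify_callback(url.replace("hh5", "xx5")))  # False: a token-shooting guess
```

A production version would also want a server-side nonce cache to reject exact replays within the freshness window, but the point stands: a keyed signature is a very low-effort way to tell a legitimate callback from a shot-in-the-dark token guess.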

So, why am I making a big deal out of all of this? Well, I’m irked. Their “discovery” of this issue conveniently coincided with my raising it publicly on the mailing lists, yet my name is mentioned nowhere in any of this. In my opinion, that’s really poor form in an “open community” where often the only compensation for effort is recognition. It’s this kind of backhanded treatment that encourages people to demonstrate security concerns through actual working exploits, which benefits no one but guarantees the creator their recognition.

In the end, I’m glad that this issue is going to be addressed, hopefully in a timely fashion, because all of us developing applications in the public software ecosystem will benefit. Still, I feel slighted. I’ll get over it while I wait for the OAuth 1.1 specification.


Comments

  1. Here’s an idea that might further mitigate this problem: Suppose the (trusted) consumer sends along some information about the logged-in user when requesting the request token. For a site with usernames, it might be the username of the user who requests authorization. The OAuth provider can then display this information, like this:

    When you click “Allow”, the account foo (with email address foo@bar.com) on Twitter Karma will be able to read and write to your Twitter account…

    This is not foolproof, because the attacker may be able to create a username similar to the user’s existing account on the consumer site. But it might increase the chances that the user notices an attack.

  2. No conspiracy here (and it’s not like anyone is taking credit for this at anyone else’s expense).

    I can guarantee you that, as the person who has been coordinating this effort from the beginning, it was not your post that triggered this. I am not trying to take anything away from your discovery, but it was not what got us working on this threat.

    The Twitter use case brought increased attention to this area of the protocol, and the threat was identified by multiple people. In fact, most aspects of this had been written about as early as last November (I was only made aware of that this week), but only last week did people start putting a complete attack together.

    I truly hope that you will become part of the OAuth community, working with us on future versions of the spec. You obviously have a lot to contribute.

  3. Eran: Thanks for stopping by and commenting!

    I know there’s no big conspiracy going on here, and I would expect that another security-minded person who takes a closer look at OAuth 1.0 like I did is likely to find the same issues. It’s just the synchronicity of the events that really leaves me feeling sore.

    I’m a big supporter of open specifications – I tried to encourage AOL to become an OpenID provider and consumer during my short tenure there – and with my position as a developer of third-party Twitter applications, expect to continue to deal with OAuth as it evolves.

    So, who needs to teach me the OAuth club’s secret handshake before I can be included in these closed-door security discussions regarding the specification? Clearly, discussing them on public lists just doesn’t happen – so, what’s the protocol?

  4. There really isn’t any secret handshake.

    The closed-door discussions were not about a new specification. They were about helping each other with live OAuth services. We had to decide how many people to expose this threat to, once it was official that there was a threat (which was a new recognition on Friday). We did have some talks about potential solutions, but no one even suggested a vote or tried to make decisions. The discussion was shut down last night when it became fully public.

    So the secret club is closed. You can quote me on this! The list is only there in case we discover actual attacks or another related threat and need to coordinate a response. But no specification work is going on on that list. No one has posted anything there since we went public.

    I was only made aware of your discovery today, here. I learned about this threat on Friday from someone else who came to a similar conclusion independently. Note that we didn’t take or give credit.

    If you want to participate, join the public mailing list and, well, participate. I will probably distill all the ideas posted to the list into a proposal over the weekend and post it as a draft for review. That’s what I do: I edit.

    At that point it will be open for review and discussion, and we will make decisions using rough consensus and a bit of benevolent dictatorship if consensus doesn’t happen (which is very rare). This is how we got to where we are today, and the full history is public for anyone to read through.

    I hope this addresses all your concerns. If not, you know where to find me.

  5. Eran: Thanks for not taking my sense of humor too seriously. I truly appreciate it beyond words I can find to describe it.
