
How MembershipReboot stores passwords properly

February 9, 2014

I’m not going to go into all of the motivation behind proper password hashing — Troy’s done an excellent job of it and he has said it all better than I ever could have. The short version is that we assume an attacker will eventually compromise the database, so we need to store passwords in a way that makes it very hard for the attacker to then recover them. This leads to the modern approach to storing passwords, which uses a “password stretching” algorithm where you salt and hash in a loop for tens of thousands of iterations. The general consensus is that it should take about one second to compute a password hash, and the number of iterations needed to reach one second is hardware dependent. One issue is that hardware gets better over time, so the number of iterations should adjust to account for that. A guideline for determining the iterations is that in the year 2000 an application should have used 1000 iterations, and that the count should double every two years after that. This means in 2012 we should have been using 64000 iterations and in 2014 we should be using 128000 iterations. As previously mentioned, this is hardware dependent and the real target is 500 to 1000 milliseconds. To help determine the right iteration count for your hardware I have a utility here.
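To make that guideline concrete, here’s a tiny C# sketch (illustrative only, not MembershipReboot’s code) of the year-based calculation:

```csharp
// Illustrative only (not MembershipReboot's implementation): the guideline
// iteration count for a given year -- 1000 iterations in 2000, doubling
// every two years after that.
static class IterationGuideline
{
    public static int ForYear(int year)
    {
        int doublings = (year - 2000) / 2; // one doubling per two years
        return 1000 << doublings;          // 2012 -> 64000, 2014 -> 128000
    }
}
```

Again, that’s just the guideline; the real target is whatever takes 500 to 1000 milliseconds on your own hardware.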

I really believe that this is the right approach to password storage. I also believe that if we can do a better job then we should (especially when it’s fairly easy to do). This was one of my motivations with MembershipReboot. I was upset that Microsoft wasn’t providing a modern implementation for password storage. Microsoft’s implementations are hard coded to use 1000 iterations. This is true *even* with their most recent identity management library “ASP.NET Identity”. This is a far cry from the current recommendations of tens of thousands of iterations.

Now, I must admit, Microsoft has opened the door slightly with ASP.NET Identity because the password hashing algorithm is configurable. This happens to be yet another complaint I have — Microsoft did not provide a configurable iteration count; they provided a configurable algorithm. While extensibility is good, this actually sucks because an application developer should not have to code this security infrastructure themselves from scratch. I think an application developer should only have to indicate the hashing iterations (overriding the paltry default of 1000).
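For context, the extensibility point ASP.NET Identity exposes is (as best I recall it) this single interface. There’s no iteration-count setting on it, so raising the count means replacing the entire hashing implementation yourself:

```csharp
// The password hashing extensibility point in ASP.NET Identity
// (Microsoft.AspNet.Identity), reproduced here from memory. Note there is
// no "iterations" knob; to change the count you have to supply the whole
// salting/hashing/encoding scheme behind this interface.
public interface IPasswordHasher
{
    string HashPassword(string password);
    PasswordVerificationResult VerifyHashedPassword(string hashedPassword,
                                                    string providedPassword);
}
```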

This is what MembershipReboot does — it allows a developer using it to indicate the number of hashing iterations. If a number is not specified, it uses the time-based iteration count described above (i.e. 64000 in 2012, 128000 in 2014, etc).
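As a sketch of what that looks like (the property name below is from memory of MembershipReboot’s configuration API, so treat it as an assumption and check it against the version you’re using):

```csharp
// Sketch of configuring the iteration count in MembershipReboot. The property
// name is an assumption from memory -- verify against your version. If you
// don't set it, the year-based default described above is used.
using BrockAllen.MembershipReboot;

var config = new MembershipRebootConfiguration
{
    // e.g. a value measured to take roughly 500-1000 ms on your hardware
    PasswordHashingIterationCount = 200000
};
```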

One last issue related to this is that the iteration count should be per-account. Think about it — a user “alice” who creates an account in 2012, say, should use the 64000 iteration count. But what if the server hardware is upgraded in 2014? A new user, say “bob”, should then use the higher iteration count of 128000. But if the current setting is 128000, what happens when “alice” authenticates? Different users will need different iteration counts when authenticating, which means the iteration count needs to be stored per-user and used when verifying each user’s password.

This is what MembershipReboot does — it stores the iteration count used to hash the password along with the hashed password itself. This way a server can change its number of iterations over time and yet each user will authenticate with the iterations used at the time their password hash was calculated. And if a user ever changes their password, the current iteration value will be used.
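To make the mechanics concrete, here’s a simplified sketch of the idea (not MembershipReboot’s actual code or storage format) using PBKDF2 via Rfc2898DeriveBytes, with the iteration count kept alongside the salt and hash so verification always uses the count the hash was created with:

```csharp
// Simplified illustration of the idea -- NOT MembershipReboot's actual code
// or storage format. The iteration count is embedded in the stored value so
// it can be raised for new passwords without breaking existing ones.
using System;
using System.Security.Cryptography;

static class PasswordHashing
{
    const int SaltSize = 16;
    const int HashSize = 32;

    // Hash with whatever the *current* iteration count is.
    public static string Hash(string password, int iterations)
    {
        var salt = new byte[SaltSize];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            var hash = kdf.GetBytes(HashSize);
            // store the iterations alongside the salt and the hash
            return String.Format("{0}.{1}.{2}",
                iterations,
                Convert.ToBase64String(salt),
                Convert.ToBase64String(hash));
        }
    }

    // Verify with the iteration count recorded when the hash was created.
    public static bool Verify(string password, string stored)
    {
        var parts = stored.Split('.');
        var iterations = Int32.Parse(parts[0]);
        var salt = Convert.FromBase64String(parts[1]);
        var expected = Convert.FromBase64String(parts[2]);

        using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            var actual = kdf.GetBytes(expected.Length);
            // constant-time comparison to avoid leaking where the mismatch is
            var diff = 0;
            for (var i = 0; i < expected.Length; i++)
                diff |= expected[i] ^ actual[i];
            return diff == 0;
        }
    }
}
```

Verification always runs with the count parsed from the stored value, so raising the server’s setting only affects passwords that get (re)hashed afterwards, which is why the next point about periodic password changes matters.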

And finally, MembershipReboot allows an application to require the user to change their password periodically. This way a user’s password gets updated with the current iteration count.

HTH

PS: One complaint about implementing an expensive password hashing operation is that this leaves your server open to a denial of service attack. If an attacker were to mount an automated attack against the login page of the application then the server would be bogged down in password hashing operations. Well, this needs to be prevented and I just wrote another post which describes why this prevention is necessary regardless of how passwords are being hashed.


Comments
  1. February 10, 2014 2:21 am

    Reblogged this on leastprivilege.com.

  2. Erik permalink
    February 10, 2014 12:47 pm

    1 second? Seriously? If you have a very busy site, you expect the server to sit there for 1 second for each user, particularly when you might have hundreds of users logging in at any given time? That seems impractical on a site of any magnitude. What about sites on shared servers? You really want these sites sucking up CPU every time a user logs in? I just can’t see this happening.

    • February 10, 2014 1:00 pm

      It’s up to your application to determine this tradeoff between the hashing delay/overhead and the amount of effort you want to require of an attacker doing offline brute force/rainbow table attacks on the passwords. 1 second, like I said, seems to be a reasonable balance between the two.

      This has some more interesting info: https://github.com/jsteven/psm/blob/master/presentations/Secure%20Password%20Storage%20AUS.pptx.pdf

      Also, if this is a big concern for you then this should motivate the use of dedicated, external identity providers for single signon.

      • Sillyloopers permalink
        July 11, 2015 8:53 pm

        1 second raw recursion is silly. The point is to make it more costly to brute force, and looping enough so that instead of doing 5000 hashes per second they can do 50 or 500 looped hashes is “good enough”. This really just makes it more costly (in time) to build rainbow tables for short password lengths and dictionary attacks; it’s not needed to make such an attack last two universe deaths vs one.

        • July 12, 2015 10:08 am

          MembershipReboot doesn’t do the looping itself (and there’s no recursion) — it uses PBKDF2. As for the time, it’s about work factor and buying your app time once you’re compromised. I’m sure all those companies recently attacked wished they had more time to deal with the compromise, and that’s the point of a higher work factor.

  3. July 19, 2015 12:03 pm

    Does MembershipReboot allow you to override the hashing algorithm? I’d like to migrate my old membership users, but need to preserve their ability to authenticate with their existing passwords.

Trackbacks

  1. How MembershipReboot mitigates login and two factor authentication brute force attacks | brockallen
  2. Decaying Code | Community Update 2014-02-10 – #rest API versioning, building a file server with #owin, #helios and ElasticSearch with #D3js
  3. Concerns with two factor authentication in ASP.NET Identity v2 | brockallen
  4. Introducing IdentityReboot | brockallen
  5. ASP.Net Identity Security | code4cake
